00:00:00.001 Started by upstream project "autotest-per-patch" build number 120610
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.074 The recommended git tool is: git
00:00:00.074 using credential 00000000-0000-0000-0000-000000000002
00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.101 Fetching changes from the remote Git repository
00:00:00.109 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.140 Using shallow fetch with depth 1
00:00:00.140 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.140 > git --version # timeout=10
00:00:00.167 > git --version # 'git version 2.39.2'
00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.167 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.167 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.454 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.465 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.476 Checking out Revision a704ed4d86859cb8cbec080c78b138476da6ee34 (FETCH_HEAD)
00:00:04.476 > git config core.sparsecheckout # timeout=10
00:00:04.488 > git read-tree -mu HEAD # timeout=10
00:00:04.505 > git checkout -f a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=5
00:00:04.525 Commit message: "packer: Insert post-processors only if at least one is defined"
00:00:04.526 > git rev-list --no-walk a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=10
00:00:04.627 [Pipeline] Start of Pipeline
00:00:04.642 [Pipeline] library
00:00:04.644 Loading library shm_lib@master
00:00:04.644 Library shm_lib@master is cached. Copying from home.
00:00:04.664 [Pipeline] node
00:00:04.681 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.682 [Pipeline] {
00:00:04.693 [Pipeline] catchError
00:00:04.695 [Pipeline] {
00:00:04.709 [Pipeline] wrap
00:00:04.719 [Pipeline] {
00:00:04.726 [Pipeline] stage
00:00:04.727 [Pipeline] { (Prologue)
00:00:04.902 [Pipeline] sh
00:00:05.179 + logger -p user.info -t JENKINS-CI
00:00:05.199 [Pipeline] echo
00:00:05.201 Node: WFP8
00:00:05.209 [Pipeline] sh
00:00:05.506 [Pipeline] setCustomBuildProperty
00:00:05.517 [Pipeline] echo
00:00:05.519 Cleanup processes
00:00:05.524 [Pipeline] sh
00:00:05.802 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.802 2746771 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.815 [Pipeline] sh
00:00:06.098 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.098 ++ grep -v 'sudo pgrep'
00:00:06.098 ++ awk '{print $1}'
00:00:06.098 + sudo kill -9
00:00:06.098 + true
00:00:06.112 [Pipeline] cleanWs
00:00:06.123 [WS-CLEANUP] Deleting project workspace...
00:00:06.123 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.129 [WS-CLEANUP] done
00:00:06.134 [Pipeline] setCustomBuildProperty
00:00:06.151 [Pipeline] sh
00:00:06.435 + sudo git config --global --replace-all safe.directory '*'
00:00:06.503 [Pipeline] nodesByLabel
00:00:06.504 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.514 [Pipeline] httpRequest
00:00:06.518 HttpMethod: GET
00:00:06.519 URL: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:06.522 Sending request to url: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:06.530 Response Code: HTTP/1.1 200 OK
00:00:06.530 Success: Status code 200 is in the accepted range: 200,404
00:00:06.531 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:08.142 [Pipeline] sh
00:00:08.421 + tar --no-same-owner -xf jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:08.438 [Pipeline] httpRequest
00:00:08.442 HttpMethod: GET
00:00:08.443 URL: http://10.211.164.101/packages/spdk_99b3305a57090397d476627a0fbcaca26b7cfada.tar.gz
00:00:08.444 Sending request to url: http://10.211.164.101/packages/spdk_99b3305a57090397d476627a0fbcaca26b7cfada.tar.gz
00:00:08.455 Response Code: HTTP/1.1 200 OK
00:00:08.455 Success: Status code 200 is in the accepted range: 200,404
00:00:08.456 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_99b3305a57090397d476627a0fbcaca26b7cfada.tar.gz
00:00:33.332 [Pipeline] sh
00:00:33.620 + tar --no-same-owner -xf spdk_99b3305a57090397d476627a0fbcaca26b7cfada.tar.gz
00:00:36.162 [Pipeline] sh
00:00:36.442 + git -C spdk log --oneline -n5
00:00:36.442 99b3305a5 nvmf/auth: Diffie-Hellman exchange support
00:00:36.442 f808ef364 nvmf/auth: add nvmf_auth_qpair_cleanup()
00:00:36.442 60b78ebde nvme/auth: make DH functions public
00:00:36.442 33fdd170e nvme/auth: get dhgroup from EVP_PKEY in nvme_auth_derive_secret()
00:00:36.442 a0b47b88d nvme/auth: split generating dhkey from getting pubkey
00:00:36.454 [Pipeline] }
00:00:36.471 [Pipeline] // stage
00:00:36.479 [Pipeline] stage
00:00:36.481 [Pipeline] { (Prepare)
00:00:36.501 [Pipeline] writeFile
00:00:36.518 [Pipeline] sh
00:00:36.797 + logger -p user.info -t JENKINS-CI
00:00:36.810 [Pipeline] sh
00:00:37.092 + logger -p user.info -t JENKINS-CI
00:00:37.104 [Pipeline] sh
00:00:37.382 + cat autorun-spdk.conf
00:00:37.382 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.382 SPDK_TEST_NVMF=1
00:00:37.382 SPDK_TEST_NVME_CLI=1
00:00:37.382 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:37.382 SPDK_TEST_NVMF_NICS=e810
00:00:37.382 SPDK_TEST_VFIOUSER=1
00:00:37.382 SPDK_RUN_UBSAN=1
00:00:37.382 NET_TYPE=phy
00:00:37.389 RUN_NIGHTLY=0
00:00:37.394 [Pipeline] readFile
00:00:37.456 [Pipeline] withEnv
00:00:37.457 [Pipeline] {
00:00:37.466 [Pipeline] sh
00:00:37.741 + set -ex
00:00:37.741 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:37.741 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:37.741 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.741 ++ SPDK_TEST_NVMF=1
00:00:37.741 ++ SPDK_TEST_NVME_CLI=1
00:00:37.741 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:37.741 ++ SPDK_TEST_NVMF_NICS=e810
00:00:37.741 ++ SPDK_TEST_VFIOUSER=1
00:00:37.741 ++ SPDK_RUN_UBSAN=1
00:00:37.741 ++ NET_TYPE=phy
00:00:37.741 ++ RUN_NIGHTLY=0
00:00:37.741 + case $SPDK_TEST_NVMF_NICS in
00:00:37.741 + DRIVERS=ice
00:00:37.741 + [[ tcp == \r\d\m\a ]]
00:00:37.741 + [[ -n ice ]]
00:00:37.741 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:37.741 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:37.741 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:37.741 rmmod: ERROR: Module irdma is not currently loaded
00:00:37.741 rmmod: ERROR: Module i40iw is not currently loaded
00:00:37.741 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:37.741 + true
00:00:37.741 + for D in $DRIVERS
00:00:37.741 + sudo modprobe ice
00:00:37.741 + exit 0
00:00:37.751 [Pipeline] }
00:00:37.768 [Pipeline] // withEnv
00:00:37.773 [Pipeline] }
00:00:37.789 [Pipeline] // stage
00:00:37.797 [Pipeline] catchError
00:00:37.799 [Pipeline] {
00:00:37.814 [Pipeline] timeout
00:00:37.815 Timeout set to expire in 40 min
00:00:37.816 [Pipeline] {
00:00:37.831 [Pipeline] stage
00:00:37.834 [Pipeline] { (Tests)
00:00:37.850 [Pipeline] sh
00:00:38.131 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:38.132 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:38.132 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:38.132 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:38.132 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:38.132 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:38.132 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:38.132 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:38.132 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:38.132 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:38.132 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:38.132 + source /etc/os-release
00:00:38.132 ++ NAME='Fedora Linux'
00:00:38.132 ++ VERSION='38 (Cloud Edition)'
00:00:38.132 ++ ID=fedora
00:00:38.132 ++ VERSION_ID=38
00:00:38.132 ++ VERSION_CODENAME=
00:00:38.132 ++ PLATFORM_ID=platform:f38
00:00:38.132 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:38.132 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:38.132 ++ LOGO=fedora-logo-icon
00:00:38.132 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:38.132 ++ HOME_URL=https://fedoraproject.org/
00:00:38.132 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:38.132 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:38.132 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:38.132 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:38.132 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:38.132 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:38.132 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:38.132 ++ SUPPORT_END=2024-05-14
00:00:38.132 ++ VARIANT='Cloud Edition'
00:00:38.132 ++ VARIANT_ID=cloud
00:00:38.132 + uname -a
00:00:38.132 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:38.132 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:40.662 Hugepages
00:00:40.662 node hugesize free / total
00:00:40.662 node0 1048576kB 0 / 0
00:00:40.662 node0 2048kB 0 / 0
00:00:40.662 node1 1048576kB 0 / 0
00:00:40.662 node1 2048kB 0 / 0
00:00:40.662
00:00:40.920 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:40.920 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:40.920 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:40.920 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:40.920 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:40.920 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:40.920 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:40.920 I/OAT
0000:00:04.6 8086 2021 0 ioatdma - - 00:00:40.920 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:40.920 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:40.920 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:40.920 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:40.920 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:40.920 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:40.920 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:40.920 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:40.920 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:40.920 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:40.920 + rm -f /tmp/spdk-ld-path 00:00:40.920 + source autorun-spdk.conf 00:00:40.920 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.920 ++ SPDK_TEST_NVMF=1 00:00:40.920 ++ SPDK_TEST_NVME_CLI=1 00:00:40.920 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.920 ++ SPDK_TEST_NVMF_NICS=e810 00:00:40.920 ++ SPDK_TEST_VFIOUSER=1 00:00:40.920 ++ SPDK_RUN_UBSAN=1 00:00:40.920 ++ NET_TYPE=phy 00:00:40.920 ++ RUN_NIGHTLY=0 00:00:40.920 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:40.920 + [[ -n '' ]] 00:00:40.920 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.920 + for M in /var/spdk/build-*-manifest.txt 00:00:40.920 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:40.920 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:40.920 + for M in /var/spdk/build-*-manifest.txt 00:00:40.920 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:40.920 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:40.920 ++ uname 00:00:40.920 + [[ Linux == \L\i\n\u\x ]] 00:00:40.920 + sudo dmesg -T 00:00:40.920 + sudo dmesg --clear 00:00:40.920 + dmesg_pid=2747781 00:00:40.920 + [[ Fedora Linux == FreeBSD ]] 00:00:40.920 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:40.920 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:40.921 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:40.921 + [[ -x /usr/src/fio-static/fio ]] 00:00:40.921 + export FIO_BIN=/usr/src/fio-static/fio 00:00:40.921 + FIO_BIN=/usr/src/fio-static/fio 00:00:40.921 + sudo dmesg -Tw 00:00:40.921 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:40.921 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:40.921 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:40.921 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:40.921 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:40.921 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:40.921 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:40.921 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:40.921 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:41.179 Test configuration: 00:00:41.179 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.179 SPDK_TEST_NVMF=1 00:00:41.179 SPDK_TEST_NVME_CLI=1 00:00:41.179 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.179 SPDK_TEST_NVMF_NICS=e810 00:00:41.179 SPDK_TEST_VFIOUSER=1 00:00:41.179 SPDK_RUN_UBSAN=1 00:00:41.179 NET_TYPE=phy 00:00:41.179 RUN_NIGHTLY=0 20:53:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:41.179 20:53:56 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:41.179 20:53:56 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:41.179 20:53:56 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:41.179 20:53:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.179 20:53:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.179 20:53:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.179 20:53:56 -- paths/export.sh@5 -- $ export PATH 00:00:41.179 20:53:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.179 20:53:56 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:41.179 20:53:56 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:41.179 20:53:56 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713466436.XXXXXX 00:00:41.179 20:53:56 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713466436.KKFLOh 00:00:41.179 20:53:56 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:41.179 20:53:56 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:41.179 20:53:56 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:41.179 20:53:56 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:41.179 20:53:56 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:41.179 20:53:56 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:41.179 20:53:56 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:41.179 20:53:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.179 20:53:56 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:41.179 20:53:56 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:41.179 20:53:56 -- pm/common@17 -- $ local monitor 00:00:41.179 20:53:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.179 20:53:56 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2747815 00:00:41.179 20:53:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.179 20:53:56 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2747817 00:00:41.179 20:53:56 -- pm/common@21 -- $ date +%s 00:00:41.179 20:53:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.179 20:53:56 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2747820 00:00:41.179 20:53:56 -- pm/common@21 -- $ date +%s 00:00:41.179 20:53:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.179 20:53:56 -- pm/common@21 -- $ date +%s 00:00:41.179 20:53:56 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2747823 00:00:41.179 20:53:56 -- pm/common@26 -- $ sleep 1 00:00:41.179 20:53:56 -- pm/common@21 -- $ date +%s 00:00:41.179 20:53:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713466436 00:00:41.179 20:53:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713466436 00:00:41.179 20:53:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713466436 00:00:41.179 20:53:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713466436 00:00:41.179 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713466436_collect-vmstat.pm.log 00:00:41.179 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713466436_collect-bmc-pm.bmc.pm.log 00:00:41.179 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713466436_collect-cpu-load.pm.log 00:00:41.179 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713466436_collect-cpu-temp.pm.log 00:00:42.114 20:53:57 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:42.114 20:53:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:42.114 20:53:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:42.114 20:53:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.114 20:53:57 -- spdk/autobuild.sh@16 -- $ date -u 00:00:42.114 Thu Apr 18 06:53:57 PM UTC 2024 00:00:42.114 20:53:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:42.114 v24.05-pre-442-g99b3305a5 00:00:42.114 20:53:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:42.114 20:53:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:42.114 20:53:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:42.114 20:53:57 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:42.114 20:53:57 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:42.114 20:53:58 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.373 ************************************ 00:00:42.373 START TEST ubsan 00:00:42.373 ************************************ 00:00:42.373 20:53:58 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:42.373 using ubsan 00:00:42.373 00:00:42.373 real 0m0.000s 00:00:42.373 user 0m0.000s 00:00:42.373 sys 0m0.000s 00:00:42.373 20:53:58 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:42.373 20:53:58 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.373 ************************************ 00:00:42.373 END TEST ubsan 00:00:42.373 ************************************ 00:00:42.373 20:53:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:42.373 20:53:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:42.373 20:53:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:42.373 20:53:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:42.373 20:53:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:42.373 20:53:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:42.373 20:53:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:42.373 20:53:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:42.373 20:53:58 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:42.631 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:42.631 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:42.889 Using 'verbs' RDMA provider 00:00:55.690 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:07.894 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:07.894 Creating mk/config.mk...done. 00:01:07.894 Creating mk/cc.flags.mk...done. 00:01:07.894 Type 'make' to build. 
00:01:07.894 20:54:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:07.894 20:54:22 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:07.894 20:54:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:07.894 20:54:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.894 ************************************ 00:01:07.894 START TEST make 00:01:07.894 ************************************ 00:01:07.894 20:54:22 -- common/autotest_common.sh@1111 -- $ make -j96 00:01:07.894 make[1]: Nothing to be done for 'all'. 00:01:08.471 The Meson build system 00:01:08.471 Version: 1.3.1 00:01:08.471 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:08.471 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:08.471 Build type: native build 00:01:08.471 Project name: libvfio-user 00:01:08.471 Project version: 0.0.1 00:01:08.471 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:08.471 C linker for the host machine: cc ld.bfd 2.39-16 00:01:08.471 Host machine cpu family: x86_64 00:01:08.471 Host machine cpu: x86_64 00:01:08.471 Run-time dependency threads found: YES 00:01:08.471 Library dl found: YES 00:01:08.471 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:08.471 Run-time dependency json-c found: YES 0.17 00:01:08.471 Run-time dependency cmocka found: YES 1.1.7 00:01:08.471 Program pytest-3 found: NO 00:01:08.471 Program flake8 found: NO 00:01:08.471 Program misspell-fixer found: NO 00:01:08.471 Program restructuredtext-lint found: NO 00:01:08.471 Program valgrind found: YES (/usr/bin/valgrind) 00:01:08.471 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:08.471 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:08.471 Compiler for C supports arguments -Wwrite-strings: YES 00:01:08.471 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:08.471 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:08.471 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:08.471 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:08.471 Build targets in project: 8 00:01:08.471 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:08.471 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:08.471 00:01:08.471 libvfio-user 0.0.1 00:01:08.471 00:01:08.471 User defined options 00:01:08.471 buildtype : debug 00:01:08.471 default_library: shared 00:01:08.471 libdir : /usr/local/lib 00:01:08.471 00:01:08.471 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:09.407 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:09.407 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:09.407 [2/37] Compiling C object samples/null.p/null.c.o 00:01:09.407 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:09.407 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:09.407 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:09.407 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:09.407 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:09.407 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:09.407 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:09.407 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:09.407 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:09.407 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:09.407 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:09.407 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:09.407 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:09.407 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:09.407 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:09.407 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:09.407 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:09.407 [20/37] Compiling C object samples/server.p/server.c.o 00:01:09.407 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:09.407 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:09.407 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:09.407 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:09.407 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:09.407 [26/37] Compiling C object samples/client.p/client.c.o 00:01:09.407 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:09.407 [28/37] Linking target samples/client 00:01:09.407 [29/37] Linking target test/unit_tests 00:01:09.407 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:09.665 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:09.665 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:09.665 [33/37] Linking target samples/gpio-pci-idio-16 00:01:09.665 [34/37] Linking target samples/null 00:01:09.665 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:09.665 [36/37] Linking target samples/lspci 00:01:09.665 [37/37] Linking target samples/server 00:01:09.665 INFO: autodetecting backend as ninja 00:01:09.665 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:09.665 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:10.232 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:10.232 ninja: no work to do. 00:01:14.420 The Meson build system 00:01:14.420 Version: 1.3.1 00:01:14.420 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:14.420 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:14.420 Build type: native build 00:01:14.420 Program cat found: YES (/usr/bin/cat) 00:01:14.420 Project name: DPDK 00:01:14.420 Project version: 23.11.0 00:01:14.420 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:14.420 C linker for the host machine: cc ld.bfd 2.39-16 00:01:14.420 Host machine cpu family: x86_64 00:01:14.420 Host machine cpu: x86_64 00:01:14.420 Message: ## Building in Developer Mode ## 00:01:14.420 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:14.420 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:14.420 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:14.420 Program python3 found: YES (/usr/bin/python3) 00:01:14.420 Program cat found: YES (/usr/bin/cat) 00:01:14.420 Compiler for C supports arguments -march=native: YES 00:01:14.420 Checking for size of "void *" : 8 00:01:14.420 Checking for size of "void *" : 8 (cached) 00:01:14.420 Library m found: YES 00:01:14.420 Library numa found: YES 00:01:14.420 Has header "numaif.h" : YES 00:01:14.420 Library fdt found: NO 00:01:14.420 Library execinfo found: NO 00:01:14.420 Has header "execinfo.h" : YES 00:01:14.420 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:14.420 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:14.420 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:14.420 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:14.420 Run-time dependency openssl found: YES 3.0.9 00:01:14.420 Run-time dependency libpcap found: YES 1.10.4 00:01:14.420 Has header "pcap.h" with dependency libpcap: YES 00:01:14.420 Compiler for C supports arguments -Wcast-qual: YES 00:01:14.420 Compiler for C supports arguments -Wdeprecated: YES 00:01:14.420 Compiler for C supports arguments -Wformat: YES 00:01:14.420 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:14.420 Compiler for C supports arguments -Wformat-security: NO 00:01:14.420 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:14.420 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:14.420 Compiler for C supports arguments -Wnested-externs: YES 00:01:14.420 Compiler for C supports arguments -Wold-style-definition: YES 00:01:14.420 Compiler for C supports arguments -Wpointer-arith: YES 00:01:14.420 Compiler for C supports arguments -Wsign-compare: YES 00:01:14.420 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:14.420 Compiler for C supports arguments -Wundef: YES 00:01:14.420 Compiler for C supports arguments -Wwrite-strings: YES 00:01:14.420 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:14.420 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:14.420 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:14.420 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:14.420 Program objdump found: YES (/usr/bin/objdump) 00:01:14.420 Compiler for C supports arguments -mavx512f: YES 00:01:14.420 Checking if "AVX512 checking" compiles: YES 00:01:14.420 Fetching value of define "__SSE4_2__" : 1 00:01:14.420 Fetching value of define "__AES__" : 1 00:01:14.420 Fetching value of define "__AVX__" : 1 00:01:14.420 Fetching value of define "__AVX2__" : 1 00:01:14.420 Fetching value of define "__AVX512BW__" : 1 00:01:14.420 Fetching value of define "__AVX512CD__" : 1 00:01:14.420 Fetching value of define "__AVX512DQ__" : 1 00:01:14.420 Fetching value of define "__AVX512F__" : 1 00:01:14.420 Fetching value of define "__AVX512VL__" : 1 00:01:14.420 Fetching value of define "__PCLMUL__" : 1 00:01:14.420 Fetching value of define "__RDRND__" : 1 00:01:14.420 Fetching value of define "__RDSEED__" : 1 00:01:14.420 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:14.420 Fetching value of define "__znver1__" : (undefined) 00:01:14.420 Fetching value of define "__znver2__" : (undefined) 00:01:14.420 Fetching value of define "__znver3__" : (undefined) 00:01:14.420 Fetching value of define "__znver4__" : (undefined) 00:01:14.420 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:14.420 Message: lib/log: Defining dependency "log" 00:01:14.420 Message: lib/kvargs: Defining dependency "kvargs" 00:01:14.420 Message: lib/telemetry: Defining dependency "telemetry" 00:01:14.420 Checking for function "getentropy" : NO 00:01:14.420 Message: lib/eal: Defining dependency "eal" 00:01:14.420 Message: lib/ring: Defining dependency "ring" 00:01:14.420 Message: lib/rcu: Defining dependency "rcu" 00:01:14.420 Message: lib/mempool: Defining dependency "mempool" 00:01:14.420 Message: lib/mbuf: Defining dependency "mbuf" 00:01:14.420 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:14.420 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:14.420 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:14.420 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:14.420 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:14.420 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:14.420 Compiler for C supports arguments -mpclmul: YES 00:01:14.420 Compiler for C supports arguments -maes: YES 00:01:14.420 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:14.420 Compiler for C supports arguments -mavx512bw: YES 00:01:14.420 Compiler for C supports arguments -mavx512dq: YES 00:01:14.420 Compiler for C supports arguments -mavx512vl: YES 00:01:14.420 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:14.420 Compiler for C supports arguments -mavx2: YES 00:01:14.420 Compiler for C supports arguments -mavx: YES 00:01:14.420 Message: lib/net: Defining dependency "net" 00:01:14.420 Message: lib/meter: Defining dependency "meter" 00:01:14.420 Message: lib/ethdev: Defining dependency "ethdev" 00:01:14.420 Message: lib/pci: Defining dependency "pci" 00:01:14.420 Message: lib/cmdline: Defining dependency "cmdline" 00:01:14.420 Message: lib/hash: Defining dependency "hash" 00:01:14.420 Message: lib/timer: Defining dependency "timer" 00:01:14.420 Message: lib/compressdev: Defining dependency "compressdev" 00:01:14.420 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:14.420 Message: lib/dmadev: Defining dependency "dmadev" 00:01:14.420 Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:14.420 Message: lib/power: Defining dependency "power" 00:01:14.420 Message: lib/reorder: Defining dependency "reorder" 00:01:14.420 Message: lib/security: Defining dependency "security" 00:01:14.420 Has header "linux/userfaultfd.h" : YES 00:01:14.420 Has header "linux/vduse.h" : YES 00:01:14.420 Message: lib/vhost: Defining dependency "vhost" 00:01:14.420 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:14.420 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:14.420 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:14.420 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:14.420 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:14.420 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:14.420 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:14.420 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:14.420 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:14.420 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:14.420 Program doxygen found: YES (/usr/bin/doxygen) 00:01:14.420 Configuring doxy-api-html.conf using configuration 00:01:14.420 Configuring doxy-api-man.conf using configuration 00:01:14.420 Program mandb found: YES (/usr/bin/mandb) 00:01:14.420 Program sphinx-build found: NO 00:01:14.420 Configuring rte_build_config.h using configuration 00:01:14.420 Message: 00:01:14.420 ================= 00:01:14.420 Applications Enabled 00:01:14.420 ================= 00:01:14.420 00:01:14.420 apps: 00:01:14.420 00:01:14.420 00:01:14.420 Message: 00:01:14.420 ================= 00:01:14.420 Libraries Enabled 00:01:14.420 ================= 00:01:14.420 00:01:14.420 libs: 00:01:14.420 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:14.420 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:14.420 cryptodev, dmadev, power, reorder, security, vhost, 00:01:14.420 00:01:14.420 Message: 00:01:14.420 =============== 00:01:14.420 Drivers Enabled 00:01:14.420 =============== 00:01:14.420 00:01:14.420 common: 00:01:14.420 00:01:14.420 bus: 00:01:14.420 pci, vdev, 00:01:14.420 mempool: 00:01:14.420 ring, 00:01:14.420 dma: 00:01:14.420 00:01:14.420 net: 00:01:14.420 00:01:14.420 crypto: 00:01:14.420 00:01:14.420 compress: 00:01:14.420 00:01:14.420 vdpa: 00:01:14.420 00:01:14.420 00:01:14.420 Message: 00:01:14.420 ================= 00:01:14.420 Content Skipped 00:01:14.420 ================= 00:01:14.420 00:01:14.420 apps: 00:01:14.420 dumpcap: explicitly disabled via build config 00:01:14.420 graph: explicitly disabled via build config 00:01:14.420 pdump: explicitly disabled via build config 00:01:14.420 proc-info: explicitly disabled via build config 00:01:14.420 test-acl: explicitly disabled via build config 00:01:14.420 test-bbdev: explicitly disabled via build config 00:01:14.420 test-cmdline: explicitly disabled via build config 00:01:14.420 test-compress-perf: explicitly disabled via build config 00:01:14.420 test-crypto-perf: explicitly disabled via build config 00:01:14.420 test-dma-perf: explicitly disabled via build config 00:01:14.420 test-eventdev: explicitly disabled via build config 00:01:14.420 test-fib: explicitly disabled via build config 00:01:14.420 test-flow-perf: explicitly disabled via build config 00:01:14.420 test-gpudev: explicitly disabled via build config 00:01:14.420 test-mldev: explicitly disabled via build 
config 00:01:14.420 test-pipeline: explicitly disabled via build config 00:01:14.420 test-pmd: explicitly disabled via build config 00:01:14.420 test-regex: explicitly disabled via build config 00:01:14.420 test-sad: explicitly disabled via build config 00:01:14.420 test-security-perf: explicitly disabled via build config 00:01:14.420 00:01:14.420 libs: 00:01:14.420 metrics: explicitly disabled via build config 00:01:14.420 acl: explicitly disabled via build config 00:01:14.420 bbdev: explicitly disabled via build config 00:01:14.420 bitratestats: explicitly disabled via build config 00:01:14.420 bpf: explicitly disabled via build config 00:01:14.420 cfgfile: explicitly disabled via build config 00:01:14.420 distributor: explicitly disabled via build config 00:01:14.420 efd: explicitly disabled via build config 00:01:14.420 eventdev: explicitly disabled via build config 00:01:14.420 dispatcher: explicitly disabled via build config 00:01:14.421 gpudev: explicitly disabled via build config 00:01:14.421 gro: explicitly disabled via build config 00:01:14.421 gso: explicitly disabled via build config 00:01:14.421 ip_frag: explicitly disabled via build config 00:01:14.421 jobstats: explicitly disabled via build config 00:01:14.421 latencystats: explicitly disabled via build config 00:01:14.421 lpm: explicitly disabled via build config 00:01:14.421 member: explicitly disabled via build config 00:01:14.421 pcapng: explicitly disabled via build config 00:01:14.421 rawdev: explicitly disabled via build config 00:01:14.421 regexdev: explicitly disabled via build config 00:01:14.421 mldev: explicitly disabled via build config 00:01:14.421 rib: explicitly disabled via build config 00:01:14.421 sched: explicitly disabled via build config 00:01:14.421 stack: explicitly disabled via build config 00:01:14.421 ipsec: explicitly disabled via build config 00:01:14.421 pdcp: explicitly disabled via build config 00:01:14.421 fib: explicitly disabled via build config 00:01:14.421 port: explicitly disabled via build config 00:01:14.421 pdump: explicitly disabled via build config 00:01:14.421 table: explicitly disabled via build config 00:01:14.421 pipeline: explicitly disabled via build config 00:01:14.421 graph: explicitly disabled via build config 00:01:14.421 node: explicitly disabled via build config 00:01:14.421 00:01:14.421 drivers: 00:01:14.421 common/cpt: not in enabled drivers build config 00:01:14.421 common/dpaax: not in enabled drivers build config 00:01:14.421 common/iavf: not in enabled drivers build config 00:01:14.421 common/idpf: not in enabled drivers build config 00:01:14.421 common/mvep: not in enabled drivers build config 00:01:14.421 common/octeontx: not in enabled drivers build config 00:01:14.421 bus/auxiliary: not in enabled drivers build config 00:01:14.421 bus/cdx: not in enabled drivers build config 00:01:14.421 bus/dpaa: not in enabled drivers build config 00:01:14.421 bus/fslmc: not in enabled drivers build config 00:01:14.421 bus/ifpga: not in enabled drivers build config 00:01:14.421 bus/platform: not in enabled drivers build config 00:01:14.421 bus/vmbus: not in enabled drivers build config 00:01:14.421 common/cnxk: not in enabled drivers build config 00:01:14.421 common/mlx5: not in enabled drivers build config 00:01:14.421 common/nfp: not in enabled drivers build config 00:01:14.421 common/qat: not in enabled drivers build config 00:01:14.421 common/sfc_efx: not in enabled drivers build config 00:01:14.421 mempool/bucket: not in enabled drivers build config 00:01:14.421 
mempool/cnxk: not in enabled drivers build config 00:01:14.421 mempool/dpaa: not in enabled drivers build config 00:01:14.421 mempool/dpaa2: not in enabled drivers build config 00:01:14.421 mempool/octeontx: not in enabled drivers build config 00:01:14.421 mempool/stack: not in enabled drivers build config 00:01:14.421 dma/cnxk: not in enabled drivers build config 00:01:14.421 dma/dpaa: not in enabled drivers build config 00:01:14.421 dma/dpaa2: not in enabled drivers build config 00:01:14.421 dma/hisilicon: not in enabled drivers build config 00:01:14.421 dma/idxd: not in enabled drivers build config 00:01:14.421 dma/ioat: not in enabled drivers build config 00:01:14.421 dma/skeleton: not in enabled drivers build config 00:01:14.421 net/af_packet: not in enabled drivers build config 00:01:14.421 net/af_xdp: not in enabled drivers build config 00:01:14.421 net/ark: not in enabled drivers build config 00:01:14.421 net/atlantic: not in enabled drivers build config 00:01:14.421 net/avp: not in enabled drivers build config 00:01:14.421 net/axgbe: not in enabled drivers build config 00:01:14.421 net/bnx2x: not in enabled drivers build config 00:01:14.421 net/bnxt: not in enabled drivers build config 00:01:14.421 net/bonding: not in enabled drivers build config 00:01:14.421 net/cnxk: not in enabled drivers build config 00:01:14.421 net/cpfl: not in enabled drivers build config 00:01:14.421 net/cxgbe: not in enabled drivers build config 00:01:14.421 net/dpaa: not in enabled drivers build config 00:01:14.421 net/dpaa2: not in enabled drivers build config 00:01:14.421 net/e1000: not in enabled drivers build config 00:01:14.421 net/ena: not in enabled drivers build config 00:01:14.421 net/enetc: not in enabled drivers build config 00:01:14.421 net/enetfec: not in enabled drivers build config 00:01:14.421 net/enic: not in enabled drivers build config 00:01:14.421 net/failsafe: not in enabled drivers build config 00:01:14.421 net/fm10k: not in enabled drivers build config 00:01:14.421 net/gve: not in enabled drivers build config 00:01:14.421 net/hinic: not in enabled drivers build config 00:01:14.421 net/hns3: not in enabled drivers build config 00:01:14.421 net/i40e: not in enabled drivers build config 00:01:14.421 net/iavf: not in enabled drivers build config 00:01:14.421 net/ice: not in enabled drivers build config 00:01:14.421 net/idpf: not in enabled drivers build config 00:01:14.421 net/igc: not in enabled drivers build config 00:01:14.421 net/ionic: not in enabled drivers build config 00:01:14.421 net/ipn3ke: not in enabled drivers build config 00:01:14.421 net/ixgbe: not in enabled drivers build config 00:01:14.421 net/mana: not in enabled drivers build config 00:01:14.421 net/memif: not in enabled drivers build config 00:01:14.421 net/mlx4: not in enabled drivers build config 00:01:14.421 net/mlx5: not in enabled drivers build config 00:01:14.421 net/mvneta: not in enabled drivers build config 00:01:14.421 net/mvpp2: not in enabled drivers build config 00:01:14.421 net/netvsc: not in enabled drivers build config 00:01:14.421 net/nfb: not in enabled drivers build config 00:01:14.421 net/nfp: not in enabled drivers build config 00:01:14.421 net/ngbe: not in enabled drivers build config 00:01:14.421 net/null: not in enabled drivers build config 00:01:14.421 net/octeontx: not in enabled drivers build config 00:01:14.421 net/octeon_ep: not in enabled drivers build config 00:01:14.421 net/pcap: not in enabled drivers build config 00:01:14.421 net/pfe: not in enabled drivers build config 
00:01:14.421 net/qede: not in enabled drivers build config 00:01:14.421 net/ring: not in enabled drivers build config 00:01:14.421 net/sfc: not in enabled drivers build config 00:01:14.421 net/softnic: not in enabled drivers build config 00:01:14.421 net/tap: not in enabled drivers build config 00:01:14.421 net/thunderx: not in enabled drivers build config 00:01:14.421 net/txgbe: not in enabled drivers build config 00:01:14.421 net/vdev_netvsc: not in enabled drivers build config 00:01:14.421 net/vhost: not in enabled drivers build config 00:01:14.421 net/virtio: not in enabled drivers build config 00:01:14.421 net/vmxnet3: not in enabled drivers build config 00:01:14.421 raw/*: missing internal dependency, "rawdev" 00:01:14.421 crypto/armv8: not in enabled drivers build config 00:01:14.421 crypto/bcmfs: not in enabled drivers build config 00:01:14.421 crypto/caam_jr: not in enabled drivers build config 00:01:14.421 crypto/ccp: not in enabled drivers build config 00:01:14.421 crypto/cnxk: not in enabled drivers build config 00:01:14.421 crypto/dpaa_sec: not in enabled drivers build config 00:01:14.421 crypto/dpaa2_sec: not in enabled drivers build config 00:01:14.421 crypto/ipsec_mb: not in enabled drivers build config 00:01:14.421 crypto/mlx5: not in enabled drivers build config 00:01:14.421 crypto/mvsam: not in enabled drivers build config 00:01:14.421 crypto/nitrox: not in enabled drivers build config 00:01:14.421 crypto/null: not in enabled drivers build config 00:01:14.421 crypto/octeontx: not in enabled drivers build config 00:01:14.421 crypto/openssl: not in enabled drivers build config 00:01:14.421 crypto/scheduler: not in enabled drivers build config 00:01:14.421 crypto/uadk: not in enabled drivers build config 00:01:14.421 crypto/virtio: not in enabled drivers build config 00:01:14.421 compress/isal: not in enabled drivers build config 00:01:14.421 compress/mlx5: not in enabled drivers build config 00:01:14.421 compress/octeontx: not in enabled drivers build config 00:01:14.421 compress/zlib: not in enabled drivers build config 00:01:14.421 regex/*: missing internal dependency, "regexdev" 00:01:14.421 ml/*: missing internal dependency, "mldev" 00:01:14.421 vdpa/ifc: not in enabled drivers build config 00:01:14.421 vdpa/mlx5: not in enabled drivers build config 00:01:14.421 vdpa/nfp: not in enabled drivers build config 00:01:14.421 vdpa/sfc: not in enabled drivers build config 00:01:14.421 event/*: missing internal dependency, "eventdev" 00:01:14.421 baseband/*: missing internal dependency, "bbdev" 00:01:14.421 gpu/*: missing internal dependency, "gpudev" 00:01:14.421 00:01:14.421 00:01:14.679 Build targets in project: 85 00:01:14.679 00:01:14.679 DPDK 23.11.0 00:01:14.679 00:01:14.679 User defined options 00:01:14.679 buildtype : debug 00:01:14.679 default_library : shared 00:01:14.679 libdir : lib 00:01:14.679 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:14.679 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:14.679 c_link_args : 00:01:14.679 cpu_instruction_set: native 00:01:14.679 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:14.679 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:14.679 enable_docs : false 00:01:14.679 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:14.679 enable_kmods : false 00:01:14.679 tests : false 00:01:14.679 00:01:14.679 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:14.954 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:14.954 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:14.954 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:15.220 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:15.220 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:15.220 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:15.220 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:15.220 [7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:15.220 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:15.220 [9/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:15.220 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:15.220 [11/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:15.220 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:15.220 [13/265] Linking static target lib/librte_kvargs.a 00:01:15.220 [14/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:15.220 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:15.220 [16/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:15.220 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:15.220 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:15.220 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:15.220 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:15.220 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:15.220 [22/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:15.220 [23/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:15.220 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:15.220 [25/265] Linking static target lib/librte_log.a 00:01:15.220 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:15.220 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:15.220 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:15.220 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:15.220 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:15.484 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:15.484 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:15.484 [33/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:15.484 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 
00:01:15.484 [35/265] Linking static target lib/librte_pci.a 00:01:15.484 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:15.484 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:15.484 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:15.484 [39/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:15.484 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:15.484 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:15.484 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:15.484 [43/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:15.484 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:15.484 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:15.484 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:15.484 [47/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:15.484 [48/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:15.745 [49/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:15.745 [50/265] Linking static target lib/librte_meter.a 00:01:15.745 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:15.745 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:15.745 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:15.745 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:15.745 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:15.745 [56/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:15.745 [57/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:15.745 [58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:15.745 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:15.745 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:15.745 [61/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:15.745 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:15.745 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:15.745 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:15.745 [65/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:15.745 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:15.745 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:15.745 [68/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:15.745 [69/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:15.745 [70/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:15.745 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:15.745 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:15.745 [73/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:15.745 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:15.745 [75/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:15.745 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:15.745 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:15.745 [78/265] Linking static target lib/librte_ring.a 00:01:15.745 [79/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:15.745 [80/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:15.745 [81/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:15.745 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:15.745 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:15.745 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:15.745 [85/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.745 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:15.745 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:15.745 [88/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:15.745 [89/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:15.745 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:15.745 [91/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:15.745 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:15.745 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:15.745 [94/265] Linking static target lib/librte_telemetry.a 00:01:15.745 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:15.745 [96/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:15.745 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:15.745 [98/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.745 [99/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:15.745 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:15.745 [101/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:15.745 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:15.745 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:15.745 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:15.745 [105/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:15.745 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:15.746 [107/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:15.746 [108/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:15.746 [109/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:15.746 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:15.746 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:15.746 [112/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:15.746 [113/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:15.746 [114/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:15.746 [115/265] Linking static target lib/librte_rcu.a 
00:01:15.746 [116/265] Linking static target lib/librte_net.a 00:01:15.746 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:15.746 [118/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:15.746 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:15.746 [120/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:15.746 [121/265] Linking static target lib/librte_cmdline.a 00:01:15.746 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:15.746 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:15.746 [124/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:15.746 [125/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:15.746 [126/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:15.746 [127/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:15.746 [128/265] Linking static target lib/librte_mempool.a 00:01:15.746 [129/265] Linking static target lib/librte_timer.a 00:01:15.746 [130/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:15.746 [131/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:15.746 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:15.746 [133/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.746 [134/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:15.746 [135/265] Linking static target lib/librte_eal.a 00:01:15.746 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:15.746 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:16.005 [138/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:16.005 [139/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:16.005 [140/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:16.005 [141/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:16.005 [142/265] Linking static target lib/librte_compressdev.a 00:01:16.005 [143/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:16.005 [144/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.005 [145/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.005 [146/265] Linking static target lib/librte_mbuf.a 00:01:16.005 [147/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:16.005 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:16.005 [149/265] Linking target lib/librte_log.so.24.0 00:01:16.005 [150/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:16.005 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:16.005 [152/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:16.005 [153/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:16.005 [154/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:16.005 [155/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:16.005 [156/265] Linking static target lib/librte_dmadev.a 00:01:16.005 [157/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:16.005 [158/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:16.005 [159/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.005 [160/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.005 [161/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:16.005 [162/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:16.005 [163/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:16.005 [164/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:16.005 [165/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:16.005 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:16.005 [167/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:16.005 [168/265] Linking static target lib/librte_reorder.a 00:01:16.005 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:16.005 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:16.005 [171/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.005 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:16.005 [173/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.005 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:16.005 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:16.005 [176/265] Linking target lib/librte_kvargs.so.24.0 00:01:16.264 [177/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:16.264 [178/265] Linking target lib/librte_telemetry.so.24.0 00:01:16.264 [179/265] Linking static target lib/librte_hash.a 00:01:16.264 [180/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:16.264 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:16.264 [182/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:16.264 [183/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:16.264 [184/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:16.264 [185/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:16.264 [186/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:16.264 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:16.264 [188/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:16.264 [189/265] Linking static target lib/librte_power.a 00:01:16.264 [190/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:16.264 [191/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:16.264 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:16.264 [193/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:16.264 [194/265] Linking static target lib/librte_security.a 00:01:16.264 [195/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:16.264 [196/265] Compiling C object 
drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:16.264 [197/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:16.264 [198/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:16.264 [199/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:16.264 [200/265] Linking static target drivers/librte_mempool_ring.a 00:01:16.524 [201/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:16.524 [202/265] Linking static target lib/librte_cryptodev.a 00:01:16.524 [203/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:16.524 [204/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:16.524 [205/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:16.524 [206/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:16.524 [207/265] Linking static target drivers/librte_bus_vdev.a 00:01:16.524 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:16.524 [209/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:16.524 [210/265] Linking static target drivers/librte_bus_pci.a 00:01:16.524 [211/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.524 [212/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.524 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.524 [214/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.524 [215/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.783 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.783 [217/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.783 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:16.783 [219/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.783 [220/265] Linking static target lib/librte_ethdev.a 00:01:16.783 [221/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:16.783 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.041 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.041 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.975 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:17.975 [226/265] Linking static target lib/librte_vhost.a 00:01:18.233 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.607 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.801 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.737 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.737 [231/265] Linking target lib/librte_eal.so.24.0 00:01:25.737 [232/265] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:25.737 [233/265] Linking target lib/librte_timer.so.24.0 00:01:25.996 [234/265] Linking target lib/librte_ring.so.24.0 00:01:25.996 [235/265] Linking target lib/librte_meter.so.24.0 00:01:25.996 [236/265] Linking target lib/librte_pci.so.24.0 00:01:25.996 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:25.996 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:25.996 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:25.996 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:25.996 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:25.996 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:25.996 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:25.996 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:25.996 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:25.996 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:26.255 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:26.255 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:26.255 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:26.255 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:26.255 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:26.515 [252/265] Linking target lib/librte_net.so.24.0 00:01:26.515 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:26.515 [254/265] Linking target lib/librte_reorder.so.24.0 00:01:26.515 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:26.515 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:26.515 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:26.515 [258/265] Linking target lib/librte_hash.so.24.0 00:01:26.515 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:26.515 [260/265] Linking target lib/librte_ethdev.so.24.0 00:01:26.515 [261/265] Linking target lib/librte_security.so.24.0 00:01:26.773 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:26.773 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:26.773 [264/265] Linking target lib/librte_power.so.24.0 00:01:26.773 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:26.773 INFO: autodetecting backend as ninja 00:01:26.773 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:27.709 CC lib/ut/ut.o 00:01:27.709 CC lib/ut_mock/mock.o 00:01:27.709 CC lib/log/log.o 00:01:27.709 CC lib/log/log_flags.o 00:01:27.709 CC lib/log/log_deprecated.o 00:01:27.709 LIB libspdk_ut_mock.a 00:01:27.968 SO libspdk_ut_mock.so.6.0 00:01:27.968 LIB libspdk_ut.a 00:01:27.968 LIB libspdk_log.a 00:01:27.968 SO libspdk_ut.so.2.0 00:01:27.968 SYMLINK libspdk_ut_mock.so 00:01:27.968 SO libspdk_log.so.7.0 00:01:27.968 SYMLINK libspdk_ut.so 00:01:27.968 SYMLINK libspdk_log.so 00:01:28.227 CC lib/dma/dma.o 00:01:28.227 CXX lib/trace_parser/trace.o 00:01:28.227 CC lib/ioat/ioat.o 00:01:28.227 CC lib/util/base64.o 00:01:28.227 CC lib/util/bit_array.o 00:01:28.227 CC lib/util/crc16.o 00:01:28.227 CC 
lib/util/cpuset.o 00:01:28.227 CC lib/util/crc32c.o 00:01:28.227 CC lib/util/crc32.o 00:01:28.227 CC lib/util/crc32_ieee.o 00:01:28.227 CC lib/util/dif.o 00:01:28.227 CC lib/util/crc64.o 00:01:28.227 CC lib/util/fd.o 00:01:28.227 CC lib/util/file.o 00:01:28.227 CC lib/util/hexlify.o 00:01:28.227 CC lib/util/iov.o 00:01:28.227 CC lib/util/math.o 00:01:28.227 CC lib/util/pipe.o 00:01:28.227 CC lib/util/strerror_tls.o 00:01:28.227 CC lib/util/string.o 00:01:28.227 CC lib/util/uuid.o 00:01:28.227 CC lib/util/fd_group.o 00:01:28.227 CC lib/util/xor.o 00:01:28.227 CC lib/util/zipf.o 00:01:28.485 CC lib/vfio_user/host/vfio_user.o 00:01:28.485 CC lib/vfio_user/host/vfio_user_pci.o 00:01:28.485 LIB libspdk_dma.a 00:01:28.485 SO libspdk_dma.so.4.0 00:01:28.485 SYMLINK libspdk_dma.so 00:01:28.485 LIB libspdk_ioat.a 00:01:28.485 SO libspdk_ioat.so.7.0 00:01:28.744 LIB libspdk_vfio_user.a 00:01:28.744 SYMLINK libspdk_ioat.so 00:01:28.744 SO libspdk_vfio_user.so.5.0 00:01:28.744 LIB libspdk_util.a 00:01:28.744 SYMLINK libspdk_vfio_user.so 00:01:28.744 SO libspdk_util.so.9.0 00:01:29.003 SYMLINK libspdk_util.so 00:01:29.003 LIB libspdk_trace_parser.a 00:01:29.003 SO libspdk_trace_parser.so.5.0 00:01:29.003 SYMLINK libspdk_trace_parser.so 00:01:29.261 CC lib/vmd/vmd.o 00:01:29.261 CC lib/vmd/led.o 00:01:29.261 CC lib/idxd/idxd.o 00:01:29.261 CC lib/idxd/idxd_user.o 00:01:29.261 CC lib/conf/conf.o 00:01:29.261 CC lib/json/json_parse.o 00:01:29.261 CC lib/json/json_util.o 00:01:29.261 CC lib/json/json_write.o 00:01:29.261 CC lib/env_dpdk/env.o 00:01:29.261 CC lib/env_dpdk/memory.o 00:01:29.261 CC lib/env_dpdk/init.o 00:01:29.261 CC lib/env_dpdk/pci.o 00:01:29.261 CC lib/rdma/common.o 00:01:29.261 CC lib/env_dpdk/threads.o 00:01:29.261 CC lib/rdma/rdma_verbs.o 00:01:29.261 CC lib/env_dpdk/pci_ioat.o 00:01:29.261 CC lib/env_dpdk/pci_virtio.o 00:01:29.261 CC lib/env_dpdk/pci_vmd.o 00:01:29.261 CC lib/env_dpdk/pci_idxd.o 00:01:29.261 CC lib/env_dpdk/pci_event.o 00:01:29.261 CC lib/env_dpdk/sigbus_handler.o 00:01:29.261 CC lib/env_dpdk/pci_dpdk.o 00:01:29.261 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:29.261 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:29.519 LIB libspdk_conf.a 00:01:29.519 SO libspdk_conf.so.6.0 00:01:29.519 LIB libspdk_rdma.a 00:01:29.519 LIB libspdk_json.a 00:01:29.519 SYMLINK libspdk_conf.so 00:01:29.519 SO libspdk_rdma.so.6.0 00:01:29.519 SO libspdk_json.so.6.0 00:01:29.519 SYMLINK libspdk_rdma.so 00:01:29.519 SYMLINK libspdk_json.so 00:01:29.519 LIB libspdk_idxd.a 00:01:29.779 SO libspdk_idxd.so.12.0 00:01:29.779 LIB libspdk_vmd.a 00:01:29.779 SO libspdk_vmd.so.6.0 00:01:29.779 SYMLINK libspdk_idxd.so 00:01:29.779 SYMLINK libspdk_vmd.so 00:01:29.779 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:29.779 CC lib/jsonrpc/jsonrpc_server.o 00:01:29.779 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:29.779 CC lib/jsonrpc/jsonrpc_client.o 00:01:30.038 LIB libspdk_jsonrpc.a 00:01:30.038 SO libspdk_jsonrpc.so.6.0 00:01:30.296 SYMLINK libspdk_jsonrpc.so 00:01:30.296 LIB libspdk_env_dpdk.a 00:01:30.296 SO libspdk_env_dpdk.so.14.0 00:01:30.296 SYMLINK libspdk_env_dpdk.so 00:01:30.555 CC lib/rpc/rpc.o 00:01:30.555 LIB libspdk_rpc.a 00:01:30.813 SO libspdk_rpc.so.6.0 00:01:30.813 SYMLINK libspdk_rpc.so 00:01:31.073 CC lib/notify/notify.o 00:01:31.073 CC lib/notify/notify_rpc.o 00:01:31.073 CC lib/trace/trace.o 00:01:31.073 CC lib/trace/trace_flags.o 00:01:31.073 CC lib/trace/trace_rpc.o 00:01:31.073 CC lib/keyring/keyring.o 00:01:31.073 CC lib/keyring/keyring_rpc.o 00:01:31.073 LIB libspdk_notify.a 00:01:31.073 SO 
libspdk_notify.so.6.0 00:01:31.331 LIB libspdk_trace.a 00:01:31.331 LIB libspdk_keyring.a 00:01:31.331 SYMLINK libspdk_notify.so 00:01:31.331 SO libspdk_trace.so.10.0 00:01:31.331 SO libspdk_keyring.so.1.0 00:01:31.331 SYMLINK libspdk_trace.so 00:01:31.331 SYMLINK libspdk_keyring.so 00:01:31.589 CC lib/sock/sock.o 00:01:31.589 CC lib/sock/sock_rpc.o 00:01:31.589 CC lib/thread/thread.o 00:01:31.589 CC lib/thread/iobuf.o 00:01:31.846 LIB libspdk_sock.a 00:01:31.846 SO libspdk_sock.so.9.0 00:01:31.846 SYMLINK libspdk_sock.so 00:01:32.105 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:32.105 CC lib/nvme/nvme_ctrlr.o 00:01:32.105 CC lib/nvme/nvme_fabric.o 00:01:32.105 CC lib/nvme/nvme_ns_cmd.o 00:01:32.363 CC lib/nvme/nvme_ns.o 00:01:32.364 CC lib/nvme/nvme_pcie_common.o 00:01:32.364 CC lib/nvme/nvme_pcie.o 00:01:32.364 CC lib/nvme/nvme_qpair.o 00:01:32.364 CC lib/nvme/nvme.o 00:01:32.364 CC lib/nvme/nvme_quirks.o 00:01:32.364 CC lib/nvme/nvme_transport.o 00:01:32.364 CC lib/nvme/nvme_discovery.o 00:01:32.364 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:32.364 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:32.364 CC lib/nvme/nvme_tcp.o 00:01:32.364 CC lib/nvme/nvme_opal.o 00:01:32.364 CC lib/nvme/nvme_io_msg.o 00:01:32.364 CC lib/nvme/nvme_poll_group.o 00:01:32.364 CC lib/nvme/nvme_zns.o 00:01:32.364 CC lib/nvme/nvme_stubs.o 00:01:32.364 CC lib/nvme/nvme_cuse.o 00:01:32.364 CC lib/nvme/nvme_auth.o 00:01:32.364 CC lib/nvme/nvme_vfio_user.o 00:01:32.364 CC lib/nvme/nvme_rdma.o 00:01:32.622 LIB libspdk_thread.a 00:01:32.622 SO libspdk_thread.so.10.0 00:01:32.622 SYMLINK libspdk_thread.so 00:01:33.188 CC lib/init/json_config.o 00:01:33.189 CC lib/init/subsystem.o 00:01:33.189 CC lib/init/subsystem_rpc.o 00:01:33.189 CC lib/accel/accel_rpc.o 00:01:33.189 CC lib/accel/accel.o 00:01:33.189 CC lib/blob/blobstore.o 00:01:33.189 CC lib/init/rpc.o 00:01:33.189 CC lib/virtio/virtio.o 00:01:33.189 CC lib/virtio/virtio_vhost_user.o 00:01:33.189 CC lib/blob/request.o 00:01:33.189 CC lib/accel/accel_sw.o 00:01:33.189 CC lib/blob/zeroes.o 00:01:33.189 CC lib/virtio/virtio_vfio_user.o 00:01:33.189 CC lib/blob/blob_bs_dev.o 00:01:33.189 CC lib/virtio/virtio_pci.o 00:01:33.189 CC lib/vfu_tgt/tgt_endpoint.o 00:01:33.189 CC lib/vfu_tgt/tgt_rpc.o 00:01:33.189 LIB libspdk_init.a 00:01:33.189 SO libspdk_init.so.5.0 00:01:33.189 LIB libspdk_vfu_tgt.a 00:01:33.189 LIB libspdk_virtio.a 00:01:33.189 SO libspdk_vfu_tgt.so.3.0 00:01:33.448 SO libspdk_virtio.so.7.0 00:01:33.448 SYMLINK libspdk_init.so 00:01:33.448 SYMLINK libspdk_vfu_tgt.so 00:01:33.448 SYMLINK libspdk_virtio.so 00:01:33.707 CC lib/event/app.o 00:01:33.707 CC lib/event/reactor.o 00:01:33.707 CC lib/event/app_rpc.o 00:01:33.707 CC lib/event/log_rpc.o 00:01:33.707 CC lib/event/scheduler_static.o 00:01:33.707 LIB libspdk_accel.a 00:01:33.707 SO libspdk_accel.so.15.0 00:01:33.707 LIB libspdk_nvme.a 00:01:33.966 SYMLINK libspdk_accel.so 00:01:33.966 SO libspdk_nvme.so.13.0 00:01:33.966 LIB libspdk_event.a 00:01:33.966 SO libspdk_event.so.13.0 00:01:33.966 SYMLINK libspdk_event.so 00:01:34.225 SYMLINK libspdk_nvme.so 00:01:34.225 CC lib/bdev/bdev.o 00:01:34.225 CC lib/bdev/bdev_zone.o 00:01:34.225 CC lib/bdev/bdev_rpc.o 00:01:34.225 CC lib/bdev/scsi_nvme.o 00:01:34.225 CC lib/bdev/part.o 00:01:35.160 LIB libspdk_blob.a 00:01:35.160 SO libspdk_blob.so.11.0 00:01:35.160 SYMLINK libspdk_blob.so 00:01:35.419 CC lib/lvol/lvol.o 00:01:35.419 CC lib/blobfs/blobfs.o 00:01:35.419 CC lib/blobfs/tree.o 00:01:35.985 LIB libspdk_bdev.a 00:01:35.985 LIB libspdk_blobfs.a 00:01:35.985 SO 
libspdk_bdev.so.15.0 00:01:35.985 LIB libspdk_lvol.a 00:01:35.985 SO libspdk_blobfs.so.10.0 00:01:35.985 SO libspdk_lvol.so.10.0 00:01:35.985 SYMLINK libspdk_bdev.so 00:01:35.985 SYMLINK libspdk_blobfs.so 00:01:35.985 SYMLINK libspdk_lvol.so 00:01:36.244 CC lib/ftl/ftl_core.o 00:01:36.244 CC lib/ftl/ftl_init.o 00:01:36.244 CC lib/ftl/ftl_layout.o 00:01:36.244 CC lib/ftl/ftl_debug.o 00:01:36.244 CC lib/ftl/ftl_io.o 00:01:36.244 CC lib/ftl/ftl_sb.o 00:01:36.244 CC lib/ftl/ftl_l2p.o 00:01:36.244 CC lib/ftl/ftl_l2p_flat.o 00:01:36.244 CC lib/ftl/ftl_band.o 00:01:36.244 CC lib/ftl/ftl_nv_cache.o 00:01:36.244 CC lib/ftl/ftl_band_ops.o 00:01:36.244 CC lib/ftl/ftl_writer.o 00:01:36.244 CC lib/ftl/ftl_rq.o 00:01:36.244 CC lib/ftl/ftl_reloc.o 00:01:36.244 CC lib/ftl/ftl_l2p_cache.o 00:01:36.244 CC lib/ftl/ftl_p2l.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:36.244 CC lib/ublk/ublk.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:36.244 CC lib/ublk/ublk_rpc.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:36.244 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:36.244 CC lib/ftl/utils/ftl_conf.o 00:01:36.244 CC lib/ftl/utils/ftl_md.o 00:01:36.244 CC lib/nvmf/ctrlr.o 00:01:36.244 CC lib/ftl/utils/ftl_mempool.o 00:01:36.244 CC lib/nvmf/ctrlr_discovery.o 00:01:36.244 CC lib/nvmf/ctrlr_bdev.o 00:01:36.244 CC lib/ftl/utils/ftl_bitmap.o 00:01:36.244 CC lib/ftl/utils/ftl_property.o 00:01:36.244 CC lib/nvmf/nvmf.o 00:01:36.244 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:36.244 CC lib/nvmf/subsystem.o 00:01:36.244 CC lib/nvmf/nvmf_rpc.o 00:01:36.244 CC lib/nvmf/transport.o 00:01:36.244 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:36.244 CC lib/nvmf/tcp.o 00:01:36.244 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:36.244 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:36.244 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:36.244 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:36.244 CC lib/nvmf/vfio_user.o 00:01:36.244 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:36.244 CC lib/nvmf/stubs.o 00:01:36.244 CC lib/nvmf/rdma.o 00:01:36.244 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:36.244 CC lib/nvmf/auth.o 00:01:36.244 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:36.244 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:36.244 CC lib/ftl/base/ftl_base_dev.o 00:01:36.244 CC lib/ftl/base/ftl_base_bdev.o 00:01:36.244 CC lib/ftl/ftl_trace.o 00:01:36.535 CC lib/nbd/nbd_rpc.o 00:01:36.535 CC lib/scsi/lun.o 00:01:36.535 CC lib/nbd/nbd.o 00:01:36.535 CC lib/scsi/port.o 00:01:36.535 CC lib/scsi/dev.o 00:01:36.535 CC lib/scsi/scsi_bdev.o 00:01:36.535 CC lib/scsi/scsi_rpc.o 00:01:36.535 CC lib/scsi/scsi.o 00:01:36.535 CC lib/scsi/task.o 00:01:36.535 CC lib/scsi/scsi_pr.o 00:01:36.794 LIB libspdk_nbd.a 00:01:36.794 SO libspdk_nbd.so.7.0 00:01:37.051 SYMLINK libspdk_nbd.so 00:01:37.051 LIB libspdk_scsi.a 00:01:37.051 SO libspdk_scsi.so.9.0 00:01:37.051 LIB libspdk_ublk.a 00:01:37.051 SO libspdk_ublk.so.3.0 00:01:37.051 SYMLINK libspdk_scsi.so 00:01:37.051 LIB libspdk_ftl.a 00:01:37.051 SYMLINK libspdk_ublk.so 00:01:37.310 SO libspdk_ftl.so.9.0 00:01:37.310 CC lib/iscsi/conn.o 00:01:37.310 CC lib/iscsi/init_grp.o 00:01:37.310 CC lib/iscsi/iscsi.o 00:01:37.310 CC 
lib/iscsi/param.o 00:01:37.310 CC lib/iscsi/md5.o 00:01:37.310 CC lib/vhost/vhost.o 00:01:37.310 CC lib/vhost/vhost_rpc.o 00:01:37.310 CC lib/iscsi/portal_grp.o 00:01:37.310 CC lib/vhost/vhost_scsi.o 00:01:37.310 CC lib/iscsi/tgt_node.o 00:01:37.310 CC lib/vhost/vhost_blk.o 00:01:37.310 CC lib/iscsi/iscsi_subsystem.o 00:01:37.310 CC lib/vhost/rte_vhost_user.o 00:01:37.310 CC lib/iscsi/iscsi_rpc.o 00:01:37.310 CC lib/iscsi/task.o 00:01:37.567 SYMLINK libspdk_ftl.so 00:01:38.133 LIB libspdk_nvmf.a 00:01:38.133 SO libspdk_nvmf.so.18.0 00:01:38.133 LIB libspdk_vhost.a 00:01:38.133 SO libspdk_vhost.so.8.0 00:01:38.392 SYMLINK libspdk_nvmf.so 00:01:38.392 SYMLINK libspdk_vhost.so 00:01:38.392 LIB libspdk_iscsi.a 00:01:38.392 SO libspdk_iscsi.so.8.0 00:01:38.650 SYMLINK libspdk_iscsi.so 00:01:38.909 CC module/env_dpdk/env_dpdk_rpc.o 00:01:38.909 CC module/vfu_device/vfu_virtio.o 00:01:38.909 CC module/vfu_device/vfu_virtio_blk.o 00:01:38.909 CC module/vfu_device/vfu_virtio_scsi.o 00:01:38.909 CC module/vfu_device/vfu_virtio_rpc.o 00:01:39.167 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:39.167 LIB libspdk_env_dpdk_rpc.a 00:01:39.167 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:39.167 CC module/sock/posix/posix.o 00:01:39.167 CC module/accel/error/accel_error.o 00:01:39.167 CC module/accel/dsa/accel_dsa.o 00:01:39.167 CC module/accel/error/accel_error_rpc.o 00:01:39.167 CC module/scheduler/gscheduler/gscheduler.o 00:01:39.167 CC module/accel/dsa/accel_dsa_rpc.o 00:01:39.167 CC module/accel/ioat/accel_ioat.o 00:01:39.167 CC module/accel/ioat/accel_ioat_rpc.o 00:01:39.167 CC module/keyring/file/keyring.o 00:01:39.167 CC module/keyring/file/keyring_rpc.o 00:01:39.167 CC module/accel/iaa/accel_iaa.o 00:01:39.167 CC module/accel/iaa/accel_iaa_rpc.o 00:01:39.167 CC module/blob/bdev/blob_bdev.o 00:01:39.167 SO libspdk_env_dpdk_rpc.so.6.0 00:01:39.167 SYMLINK libspdk_env_dpdk_rpc.so 00:01:39.425 LIB libspdk_scheduler_dpdk_governor.a 00:01:39.425 LIB libspdk_scheduler_gscheduler.a 00:01:39.425 LIB libspdk_scheduler_dynamic.a 00:01:39.425 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:39.425 LIB libspdk_keyring_file.a 00:01:39.425 LIB libspdk_accel_error.a 00:01:39.425 SO libspdk_scheduler_gscheduler.so.4.0 00:01:39.425 SO libspdk_scheduler_dynamic.so.4.0 00:01:39.425 LIB libspdk_accel_dsa.a 00:01:39.425 LIB libspdk_accel_ioat.a 00:01:39.425 LIB libspdk_accel_iaa.a 00:01:39.425 SO libspdk_keyring_file.so.1.0 00:01:39.425 SO libspdk_accel_dsa.so.5.0 00:01:39.425 SO libspdk_accel_error.so.2.0 00:01:39.425 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:39.425 SO libspdk_accel_ioat.so.6.0 00:01:39.425 SYMLINK libspdk_scheduler_dynamic.so 00:01:39.425 SO libspdk_accel_iaa.so.3.0 00:01:39.425 SYMLINK libspdk_scheduler_gscheduler.so 00:01:39.425 LIB libspdk_blob_bdev.a 00:01:39.425 SYMLINK libspdk_accel_error.so 00:01:39.425 SYMLINK libspdk_keyring_file.so 00:01:39.425 SYMLINK libspdk_accel_dsa.so 00:01:39.425 SO libspdk_blob_bdev.so.11.0 00:01:39.425 SYMLINK libspdk_accel_ioat.so 00:01:39.425 SYMLINK libspdk_accel_iaa.so 00:01:39.425 SYMLINK libspdk_blob_bdev.so 00:01:39.425 LIB libspdk_vfu_device.a 00:01:39.682 SO libspdk_vfu_device.so.3.0 00:01:39.682 SYMLINK libspdk_vfu_device.so 00:01:39.682 LIB libspdk_sock_posix.a 00:01:39.682 SO libspdk_sock_posix.so.6.0 00:01:39.940 SYMLINK libspdk_sock_posix.so 00:01:39.940 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:39.940 CC module/bdev/nvme/bdev_nvme.o 00:01:39.940 CC module/bdev/nvme/nvme_rpc.o 00:01:39.940 CC module/bdev/nvme/bdev_mdns_client.o 
00:01:39.940 CC module/bdev/nvme/vbdev_opal.o 00:01:39.940 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:39.940 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:39.940 CC module/bdev/malloc/bdev_malloc.o 00:01:39.940 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:39.940 CC module/bdev/gpt/gpt.o 00:01:39.940 CC module/bdev/gpt/vbdev_gpt.o 00:01:39.940 CC module/bdev/delay/vbdev_delay.o 00:01:39.940 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:39.940 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:39.940 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:39.940 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:39.940 CC module/blobfs/bdev/blobfs_bdev.o 00:01:39.940 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:39.940 CC module/bdev/lvol/vbdev_lvol.o 00:01:39.940 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:39.940 CC module/bdev/raid/bdev_raid_rpc.o 00:01:39.940 CC module/bdev/raid/bdev_raid.o 00:01:39.940 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:39.940 CC module/bdev/iscsi/bdev_iscsi.o 00:01:39.940 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:39.940 CC module/bdev/raid/bdev_raid_sb.o 00:01:39.940 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:39.940 CC module/bdev/raid/concat.o 00:01:39.940 CC module/bdev/raid/raid1.o 00:01:39.940 CC module/bdev/raid/raid0.o 00:01:39.940 CC module/bdev/error/vbdev_error_rpc.o 00:01:39.940 CC module/bdev/ftl/bdev_ftl.o 00:01:39.940 CC module/bdev/error/vbdev_error.o 00:01:39.940 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:39.940 CC module/bdev/split/vbdev_split.o 00:01:39.940 CC module/bdev/split/vbdev_split_rpc.o 00:01:39.940 CC module/bdev/aio/bdev_aio.o 00:01:39.940 CC module/bdev/aio/bdev_aio_rpc.o 00:01:39.940 CC module/bdev/null/bdev_null.o 00:01:39.940 CC module/bdev/null/bdev_null_rpc.o 00:01:39.940 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:39.940 CC module/bdev/passthru/vbdev_passthru.o 00:01:40.198 LIB libspdk_blobfs_bdev.a 00:01:40.198 SO libspdk_blobfs_bdev.so.6.0 00:01:40.198 LIB libspdk_bdev_null.a 00:01:40.198 LIB libspdk_bdev_split.a 00:01:40.198 LIB libspdk_bdev_ftl.a 00:01:40.198 LIB libspdk_bdev_gpt.a 00:01:40.198 SO libspdk_bdev_null.so.6.0 00:01:40.198 LIB libspdk_bdev_error.a 00:01:40.198 SO libspdk_bdev_split.so.6.0 00:01:40.198 SYMLINK libspdk_blobfs_bdev.so 00:01:40.198 SO libspdk_bdev_ftl.so.6.0 00:01:40.198 SO libspdk_bdev_gpt.so.6.0 00:01:40.198 LIB libspdk_bdev_passthru.a 00:01:40.198 SO libspdk_bdev_error.so.6.0 00:01:40.198 LIB libspdk_bdev_zone_block.a 00:01:40.198 LIB libspdk_bdev_delay.a 00:01:40.198 LIB libspdk_bdev_aio.a 00:01:40.198 LIB libspdk_bdev_iscsi.a 00:01:40.198 LIB libspdk_bdev_malloc.a 00:01:40.198 SYMLINK libspdk_bdev_null.so 00:01:40.198 SO libspdk_bdev_passthru.so.6.0 00:01:40.198 SYMLINK libspdk_bdev_split.so 00:01:40.198 SO libspdk_bdev_delay.so.6.0 00:01:40.198 SO libspdk_bdev_zone_block.so.6.0 00:01:40.198 SO libspdk_bdev_iscsi.so.6.0 00:01:40.198 SO libspdk_bdev_aio.so.6.0 00:01:40.456 SO libspdk_bdev_malloc.so.6.0 00:01:40.456 SYMLINK libspdk_bdev_ftl.so 00:01:40.456 SYMLINK libspdk_bdev_gpt.so 00:01:40.456 SYMLINK libspdk_bdev_error.so 00:01:40.456 SYMLINK libspdk_bdev_passthru.so 00:01:40.456 SYMLINK libspdk_bdev_iscsi.so 00:01:40.456 SYMLINK libspdk_bdev_delay.so 00:01:40.456 SYMLINK libspdk_bdev_zone_block.so 00:01:40.456 LIB libspdk_bdev_lvol.a 00:01:40.456 SYMLINK libspdk_bdev_malloc.so 00:01:40.456 SYMLINK libspdk_bdev_aio.so 00:01:40.456 LIB libspdk_bdev_virtio.a 00:01:40.456 SO libspdk_bdev_lvol.so.6.0 00:01:40.456 SO libspdk_bdev_virtio.so.6.0 00:01:40.456 SYMLINK libspdk_bdev_lvol.so 
00:01:40.456 SYMLINK libspdk_bdev_virtio.so 00:01:40.715 LIB libspdk_bdev_raid.a 00:01:40.715 SO libspdk_bdev_raid.so.6.0 00:01:40.715 SYMLINK libspdk_bdev_raid.so 00:01:41.651 LIB libspdk_bdev_nvme.a 00:01:41.651 SO libspdk_bdev_nvme.so.7.0 00:01:41.651 SYMLINK libspdk_bdev_nvme.so 00:01:42.220 CC module/event/subsystems/sock/sock.o 00:01:42.220 CC module/event/subsystems/keyring/keyring.o 00:01:42.220 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:42.220 CC module/event/subsystems/iobuf/iobuf.o 00:01:42.220 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:42.220 CC module/event/subsystems/scheduler/scheduler.o 00:01:42.220 CC module/event/subsystems/vmd/vmd.o 00:01:42.220 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:42.220 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:42.220 LIB libspdk_event_sock.a 00:01:42.220 LIB libspdk_event_keyring.a 00:01:42.478 SO libspdk_event_sock.so.5.0 00:01:42.478 LIB libspdk_event_vhost_blk.a 00:01:42.478 LIB libspdk_event_vfu_tgt.a 00:01:42.478 LIB libspdk_event_iobuf.a 00:01:42.478 LIB libspdk_event_scheduler.a 00:01:42.478 SO libspdk_event_keyring.so.1.0 00:01:42.478 SO libspdk_event_vhost_blk.so.3.0 00:01:42.478 LIB libspdk_event_vmd.a 00:01:42.478 SYMLINK libspdk_event_sock.so 00:01:42.478 SO libspdk_event_vfu_tgt.so.3.0 00:01:42.478 SO libspdk_event_iobuf.so.3.0 00:01:42.478 SO libspdk_event_scheduler.so.4.0 00:01:42.478 SYMLINK libspdk_event_keyring.so 00:01:42.478 SO libspdk_event_vmd.so.6.0 00:01:42.478 SYMLINK libspdk_event_vhost_blk.so 00:01:42.478 SYMLINK libspdk_event_vfu_tgt.so 00:01:42.478 SYMLINK libspdk_event_iobuf.so 00:01:42.478 SYMLINK libspdk_event_scheduler.so 00:01:42.478 SYMLINK libspdk_event_vmd.so 00:01:42.736 CC module/event/subsystems/accel/accel.o 00:01:43.022 LIB libspdk_event_accel.a 00:01:43.022 SO libspdk_event_accel.so.6.0 00:01:43.022 SYMLINK libspdk_event_accel.so 00:01:43.281 CC module/event/subsystems/bdev/bdev.o 00:01:43.540 LIB libspdk_event_bdev.a 00:01:43.540 SO libspdk_event_bdev.so.6.0 00:01:43.540 SYMLINK libspdk_event_bdev.so 00:01:43.799 CC module/event/subsystems/scsi/scsi.o 00:01:43.799 CC module/event/subsystems/nbd/nbd.o 00:01:43.799 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:43.799 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:43.799 CC module/event/subsystems/ublk/ublk.o 00:01:44.059 LIB libspdk_event_scsi.a 00:01:44.059 LIB libspdk_event_nbd.a 00:01:44.059 SO libspdk_event_scsi.so.6.0 00:01:44.059 LIB libspdk_event_ublk.a 00:01:44.059 SO libspdk_event_nbd.so.6.0 00:01:44.059 SO libspdk_event_ublk.so.3.0 00:01:44.059 SYMLINK libspdk_event_scsi.so 00:01:44.059 LIB libspdk_event_nvmf.a 00:01:44.059 SYMLINK libspdk_event_nbd.so 00:01:44.059 SYMLINK libspdk_event_ublk.so 00:01:44.059 SO libspdk_event_nvmf.so.6.0 00:01:44.059 SYMLINK libspdk_event_nvmf.so 00:01:44.317 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:44.317 CC module/event/subsystems/iscsi/iscsi.o 00:01:44.575 LIB libspdk_event_vhost_scsi.a 00:01:44.575 LIB libspdk_event_iscsi.a 00:01:44.575 SO libspdk_event_vhost_scsi.so.3.0 00:01:44.575 SO libspdk_event_iscsi.so.6.0 00:01:44.575 SYMLINK libspdk_event_vhost_scsi.so 00:01:44.575 SYMLINK libspdk_event_iscsi.so 00:01:44.833 SO libspdk.so.6.0 00:01:44.833 SYMLINK libspdk.so 00:01:45.099 CXX app/trace/trace.o 00:01:45.099 CC app/trace_record/trace_record.o 00:01:45.099 TEST_HEADER include/spdk/accel_module.h 00:01:45.099 TEST_HEADER include/spdk/accel.h 00:01:45.099 TEST_HEADER include/spdk/barrier.h 00:01:45.099 TEST_HEADER include/spdk/assert.h 00:01:45.099 
TEST_HEADER include/spdk/bdev.h 00:01:45.099 TEST_HEADER include/spdk/base64.h 00:01:45.099 CC app/spdk_nvme_discover/discovery_aer.o 00:01:45.099 CC app/spdk_lspci/spdk_lspci.o 00:01:45.099 TEST_HEADER include/spdk/bdev_module.h 00:01:45.099 TEST_HEADER include/spdk/bdev_zone.h 00:01:45.099 TEST_HEADER include/spdk/bit_array.h 00:01:45.099 TEST_HEADER include/spdk/bit_pool.h 00:01:45.099 TEST_HEADER include/spdk/blob_bdev.h 00:01:45.099 CC app/spdk_top/spdk_top.o 00:01:45.099 CC app/spdk_nvme_perf/perf.o 00:01:45.099 TEST_HEADER include/spdk/blobfs.h 00:01:45.099 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:45.099 TEST_HEADER include/spdk/conf.h 00:01:45.099 CC app/spdk_nvme_identify/identify.o 00:01:45.099 TEST_HEADER include/spdk/blob.h 00:01:45.099 TEST_HEADER include/spdk/cpuset.h 00:01:45.099 TEST_HEADER include/spdk/config.h 00:01:45.099 TEST_HEADER include/spdk/crc16.h 00:01:45.099 CC test/rpc_client/rpc_client_test.o 00:01:45.099 TEST_HEADER include/spdk/crc32.h 00:01:45.099 TEST_HEADER include/spdk/crc64.h 00:01:45.099 TEST_HEADER include/spdk/dif.h 00:01:45.099 TEST_HEADER include/spdk/dma.h 00:01:45.099 TEST_HEADER include/spdk/endian.h 00:01:45.099 TEST_HEADER include/spdk/env.h 00:01:45.099 TEST_HEADER include/spdk/event.h 00:01:45.099 TEST_HEADER include/spdk/env_dpdk.h 00:01:45.099 TEST_HEADER include/spdk/fd_group.h 00:01:45.099 TEST_HEADER include/spdk/fd.h 00:01:45.099 TEST_HEADER include/spdk/ftl.h 00:01:45.099 TEST_HEADER include/spdk/file.h 00:01:45.099 TEST_HEADER include/spdk/gpt_spec.h 00:01:45.099 TEST_HEADER include/spdk/histogram_data.h 00:01:45.099 TEST_HEADER include/spdk/hexlify.h 00:01:45.099 TEST_HEADER include/spdk/idxd.h 00:01:45.100 TEST_HEADER include/spdk/idxd_spec.h 00:01:45.100 TEST_HEADER include/spdk/init.h 00:01:45.100 TEST_HEADER include/spdk/ioat.h 00:01:45.100 TEST_HEADER include/spdk/iscsi_spec.h 00:01:45.100 TEST_HEADER include/spdk/ioat_spec.h 00:01:45.100 TEST_HEADER include/spdk/json.h 00:01:45.100 TEST_HEADER include/spdk/jsonrpc.h 00:01:45.100 TEST_HEADER include/spdk/keyring_module.h 00:01:45.100 TEST_HEADER include/spdk/keyring.h 00:01:45.100 TEST_HEADER include/spdk/likely.h 00:01:45.100 CC app/spdk_dd/spdk_dd.o 00:01:45.100 TEST_HEADER include/spdk/lvol.h 00:01:45.100 TEST_HEADER include/spdk/log.h 00:01:45.100 TEST_HEADER include/spdk/mmio.h 00:01:45.100 TEST_HEADER include/spdk/memory.h 00:01:45.100 TEST_HEADER include/spdk/nbd.h 00:01:45.100 TEST_HEADER include/spdk/notify.h 00:01:45.100 TEST_HEADER include/spdk/nvme.h 00:01:45.100 TEST_HEADER include/spdk/nvme_intel.h 00:01:45.100 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:45.100 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:45.100 CC app/nvmf_tgt/nvmf_main.o 00:01:45.100 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:45.100 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:45.100 TEST_HEADER include/spdk/nvme_spec.h 00:01:45.100 TEST_HEADER include/spdk/nvme_zns.h 00:01:45.100 TEST_HEADER include/spdk/nvmf.h 00:01:45.100 TEST_HEADER include/spdk/nvmf_spec.h 00:01:45.100 TEST_HEADER include/spdk/nvmf_transport.h 00:01:45.100 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:45.100 TEST_HEADER include/spdk/opal.h 00:01:45.100 TEST_HEADER include/spdk/opal_spec.h 00:01:45.100 TEST_HEADER include/spdk/pci_ids.h 00:01:45.100 TEST_HEADER include/spdk/pipe.h 00:01:45.100 CC app/spdk_tgt/spdk_tgt.o 00:01:45.100 TEST_HEADER include/spdk/queue.h 00:01:45.100 TEST_HEADER include/spdk/reduce.h 00:01:45.100 CC app/vhost/vhost.o 00:01:45.100 TEST_HEADER include/spdk/scheduler.h 00:01:45.100 
TEST_HEADER include/spdk/rpc.h 00:01:45.100 CC app/iscsi_tgt/iscsi_tgt.o 00:01:45.100 TEST_HEADER include/spdk/scsi_spec.h 00:01:45.100 TEST_HEADER include/spdk/scsi.h 00:01:45.100 TEST_HEADER include/spdk/sock.h 00:01:45.100 TEST_HEADER include/spdk/stdinc.h 00:01:45.100 TEST_HEADER include/spdk/thread.h 00:01:45.100 TEST_HEADER include/spdk/string.h 00:01:45.100 TEST_HEADER include/spdk/trace.h 00:01:45.100 TEST_HEADER include/spdk/trace_parser.h 00:01:45.100 TEST_HEADER include/spdk/tree.h 00:01:45.100 TEST_HEADER include/spdk/ublk.h 00:01:45.100 TEST_HEADER include/spdk/util.h 00:01:45.100 TEST_HEADER include/spdk/uuid.h 00:01:45.100 TEST_HEADER include/spdk/version.h 00:01:45.100 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:45.100 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:45.100 TEST_HEADER include/spdk/vhost.h 00:01:45.100 TEST_HEADER include/spdk/xor.h 00:01:45.100 TEST_HEADER include/spdk/vmd.h 00:01:45.100 CXX test/cpp_headers/accel.o 00:01:45.100 TEST_HEADER include/spdk/zipf.h 00:01:45.100 CXX test/cpp_headers/assert.o 00:01:45.100 CXX test/cpp_headers/barrier.o 00:01:45.100 CXX test/cpp_headers/accel_module.o 00:01:45.100 CXX test/cpp_headers/base64.o 00:01:45.100 CXX test/cpp_headers/bdev.o 00:01:45.100 CXX test/cpp_headers/bdev_module.o 00:01:45.100 CXX test/cpp_headers/bit_array.o 00:01:45.100 CXX test/cpp_headers/bdev_zone.o 00:01:45.100 CXX test/cpp_headers/blob_bdev.o 00:01:45.100 CXX test/cpp_headers/bit_pool.o 00:01:45.100 CXX test/cpp_headers/blobfs_bdev.o 00:01:45.100 CXX test/cpp_headers/blob.o 00:01:45.100 CXX test/cpp_headers/conf.o 00:01:45.100 CXX test/cpp_headers/blobfs.o 00:01:45.100 CXX test/cpp_headers/config.o 00:01:45.100 CXX test/cpp_headers/cpuset.o 00:01:45.100 CXX test/cpp_headers/crc16.o 00:01:45.100 CXX test/cpp_headers/crc32.o 00:01:45.100 CXX test/cpp_headers/crc64.o 00:01:45.100 CXX test/cpp_headers/dif.o 00:01:45.363 CXX test/cpp_headers/dma.o 00:01:45.363 CC examples/util/zipf/zipf.o 00:01:45.363 CC examples/ioat/perf/perf.o 00:01:45.363 CC examples/sock/hello_world/hello_sock.o 00:01:45.363 CC examples/ioat/verify/verify.o 00:01:45.363 CC examples/accel/perf/accel_perf.o 00:01:45.363 CC test/event/reactor/reactor.o 00:01:45.363 CC test/nvme/err_injection/err_injection.o 00:01:45.363 CC test/env/pci/pci_ut.o 00:01:45.363 CC test/event/event_perf/event_perf.o 00:01:45.363 CC test/nvme/reserve/reserve.o 00:01:45.363 CC examples/nvme/arbitration/arbitration.o 00:01:45.363 CC examples/vmd/lsvmd/lsvmd.o 00:01:45.363 CC test/env/vtophys/vtophys.o 00:01:45.363 CC test/event/reactor_perf/reactor_perf.o 00:01:45.363 CC examples/idxd/perf/perf.o 00:01:45.363 CC test/nvme/startup/startup.o 00:01:45.363 CC test/env/memory/memory_ut.o 00:01:45.363 CC test/nvme/simple_copy/simple_copy.o 00:01:45.363 CC app/fio/nvme/fio_plugin.o 00:01:45.363 CC test/event/app_repeat/app_repeat.o 00:01:45.363 CC examples/vmd/led/led.o 00:01:45.363 CC examples/bdev/bdevperf/bdevperf.o 00:01:45.363 CC examples/nvmf/nvmf/nvmf.o 00:01:45.363 CC test/thread/poller_perf/poller_perf.o 00:01:45.363 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:45.363 CC test/nvme/compliance/nvme_compliance.o 00:01:45.363 CC examples/nvme/hotplug/hotplug.o 00:01:45.363 CC examples/blob/hello_world/hello_blob.o 00:01:45.363 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:45.363 CC test/app/histogram_perf/histogram_perf.o 00:01:45.363 CC examples/nvme/reconnect/reconnect.o 00:01:45.363 CC test/nvme/aer/aer.o 00:01:45.363 CC examples/nvme/hello_world/hello_world.o 00:01:45.363 
CC test/nvme/boot_partition/boot_partition.o 00:01:45.363 CC test/nvme/fdp/fdp.o 00:01:45.363 CC test/app/jsoncat/jsoncat.o 00:01:45.363 CC test/nvme/connect_stress/connect_stress.o 00:01:45.363 CC test/nvme/reset/reset.o 00:01:45.363 CC test/blobfs/mkfs/mkfs.o 00:01:45.363 CC test/nvme/e2edp/nvme_dp.o 00:01:45.363 CC test/nvme/sgl/sgl.o 00:01:45.363 CC test/nvme/cuse/cuse.o 00:01:45.363 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:45.363 CC test/nvme/overhead/overhead.o 00:01:45.363 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:45.363 CC examples/bdev/hello_world/hello_bdev.o 00:01:45.363 CC examples/thread/thread/thread_ex.o 00:01:45.363 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:45.363 CC test/app/stub/stub.o 00:01:45.363 CC test/nvme/fused_ordering/fused_ordering.o 00:01:45.363 CC examples/blob/cli/blobcli.o 00:01:45.363 CC test/bdev/bdevio/bdevio.o 00:01:45.363 CC examples/nvme/abort/abort.o 00:01:45.363 CC test/dma/test_dma/test_dma.o 00:01:45.363 LINK spdk_lspci 00:01:45.363 CC test/app/bdev_svc/bdev_svc.o 00:01:45.363 CC app/fio/bdev/fio_plugin.o 00:01:45.363 CC test/event/scheduler/scheduler.o 00:01:45.363 CC test/accel/dif/dif.o 00:01:45.629 LINK rpc_client_test 00:01:45.629 LINK nvmf_tgt 00:01:45.629 LINK vhost 00:01:45.629 CC test/lvol/esnap/esnap.o 00:01:45.629 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:45.629 LINK spdk_tgt 00:01:45.629 LINK spdk_nvme_discover 00:01:45.629 CC test/env/mem_callbacks/mem_callbacks.o 00:01:45.629 LINK vtophys 00:01:45.629 LINK poller_perf 00:01:45.629 LINK reactor 00:01:45.629 LINK iscsi_tgt 00:01:45.629 LINK lsvmd 00:01:45.629 LINK jsoncat 00:01:45.629 CXX test/cpp_headers/endian.o 00:01:45.629 LINK err_injection 00:01:45.629 CXX test/cpp_headers/env_dpdk.o 00:01:45.629 CXX test/cpp_headers/env.o 00:01:45.629 LINK app_repeat 00:01:45.629 CXX test/cpp_headers/event.o 00:01:45.629 LINK spdk_trace_record 00:01:45.629 CXX test/cpp_headers/fd_group.o 00:01:45.629 CXX test/cpp_headers/fd.o 00:01:45.629 LINK interrupt_tgt 00:01:45.629 CXX test/cpp_headers/file.o 00:01:45.889 LINK zipf 00:01:45.889 CXX test/cpp_headers/ftl.o 00:01:45.889 CXX test/cpp_headers/gpt_spec.o 00:01:45.889 LINK ioat_perf 00:01:45.889 LINK connect_stress 00:01:45.889 LINK reserve 00:01:45.889 LINK reactor_perf 00:01:45.889 LINK event_perf 00:01:45.889 CXX test/cpp_headers/hexlify.o 00:01:45.889 LINK verify 00:01:45.889 LINK cmb_copy 00:01:45.889 LINK startup 00:01:45.889 LINK led 00:01:45.889 LINK bdev_svc 00:01:45.890 CXX test/cpp_headers/histogram_data.o 00:01:45.890 LINK histogram_perf 00:01:45.890 LINK hello_world 00:01:45.890 LINK fused_ordering 00:01:45.890 CXX test/cpp_headers/idxd.o 00:01:45.890 LINK simple_copy 00:01:45.890 LINK spdk_dd 00:01:45.890 LINK env_dpdk_post_init 00:01:45.890 LINK boot_partition 00:01:45.890 LINK pmr_persistence 00:01:45.890 CXX test/cpp_headers/idxd_spec.o 00:01:45.890 CXX test/cpp_headers/init.o 00:01:45.890 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:45.890 LINK mkfs 00:01:45.890 CXX test/cpp_headers/ioat.o 00:01:45.890 LINK stub 00:01:45.890 CXX test/cpp_headers/ioat_spec.o 00:01:45.890 CXX test/cpp_headers/iscsi_spec.o 00:01:45.890 LINK doorbell_aers 00:01:45.890 CXX test/cpp_headers/json.o 00:01:45.890 LINK overhead 00:01:45.890 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:45.890 LINK hello_sock 00:01:45.890 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:45.890 LINK spdk_trace 00:01:45.890 LINK nvmf 00:01:45.890 CXX test/cpp_headers/jsonrpc.o 00:01:45.890 CXX test/cpp_headers/keyring.o 00:01:45.890 LINK idxd_perf 
00:01:45.890 CXX test/cpp_headers/keyring_module.o 00:01:45.890 LINK hello_blob 00:01:45.890 CXX test/cpp_headers/likely.o 00:01:45.890 CXX test/cpp_headers/log.o 00:01:45.890 LINK hotplug 00:01:45.890 CXX test/cpp_headers/lvol.o 00:01:45.890 LINK reconnect 00:01:45.890 LINK hello_bdev 00:01:45.890 CXX test/cpp_headers/memory.o 00:01:45.890 CXX test/cpp_headers/mmio.o 00:01:45.890 LINK reset 00:01:45.890 CXX test/cpp_headers/nbd.o 00:01:45.890 LINK thread 00:01:45.890 LINK sgl 00:01:45.890 CXX test/cpp_headers/notify.o 00:01:45.890 LINK scheduler 00:01:46.150 LINK nvme_dp 00:01:46.150 LINK pci_ut 00:01:46.150 LINK aer 00:01:46.150 CXX test/cpp_headers/nvme.o 00:01:46.150 CXX test/cpp_headers/nvme_intel.o 00:01:46.150 CXX test/cpp_headers/nvme_ocssd.o 00:01:46.150 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:46.150 CXX test/cpp_headers/nvme_spec.o 00:01:46.150 LINK nvme_compliance 00:01:46.150 CXX test/cpp_headers/nvme_zns.o 00:01:46.150 LINK accel_perf 00:01:46.150 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:46.150 CXX test/cpp_headers/nvmf_cmd.o 00:01:46.150 CXX test/cpp_headers/nvmf.o 00:01:46.150 CXX test/cpp_headers/nvmf_spec.o 00:01:46.150 CXX test/cpp_headers/nvmf_transport.o 00:01:46.150 LINK arbitration 00:01:46.150 CXX test/cpp_headers/opal.o 00:01:46.150 CXX test/cpp_headers/opal_spec.o 00:01:46.150 CXX test/cpp_headers/pci_ids.o 00:01:46.150 CXX test/cpp_headers/pipe.o 00:01:46.150 CXX test/cpp_headers/queue.o 00:01:46.150 CXX test/cpp_headers/reduce.o 00:01:46.150 CXX test/cpp_headers/rpc.o 00:01:46.150 CXX test/cpp_headers/scheduler.o 00:01:46.150 CXX test/cpp_headers/scsi.o 00:01:46.150 CXX test/cpp_headers/scsi_spec.o 00:01:46.150 CXX test/cpp_headers/sock.o 00:01:46.150 CXX test/cpp_headers/stdinc.o 00:01:46.150 CXX test/cpp_headers/string.o 00:01:46.150 LINK fdp 00:01:46.150 CXX test/cpp_headers/thread.o 00:01:46.150 CXX test/cpp_headers/trace.o 00:01:46.150 CXX test/cpp_headers/trace_parser.o 00:01:46.150 CXX test/cpp_headers/tree.o 00:01:46.150 LINK abort 00:01:46.150 CXX test/cpp_headers/ublk.o 00:01:46.150 CXX test/cpp_headers/util.o 00:01:46.150 CXX test/cpp_headers/uuid.o 00:01:46.150 CXX test/cpp_headers/version.o 00:01:46.150 CXX test/cpp_headers/vfio_user_pci.o 00:01:46.150 LINK test_dma 00:01:46.150 CXX test/cpp_headers/vfio_user_spec.o 00:01:46.150 CXX test/cpp_headers/vhost.o 00:01:46.150 CXX test/cpp_headers/vmd.o 00:01:46.150 CXX test/cpp_headers/xor.o 00:01:46.150 LINK blobcli 00:01:46.150 CXX test/cpp_headers/zipf.o 00:01:46.150 LINK bdevio 00:01:46.150 LINK dif 00:01:46.150 LINK spdk_bdev 00:01:46.150 LINK nvme_fuzz 00:01:46.408 LINK nvme_manage 00:01:46.408 LINK spdk_nvme 00:01:46.408 LINK spdk_top 00:01:46.665 LINK spdk_nvme_identify 00:01:46.665 LINK mem_callbacks 00:01:46.665 LINK memory_ut 00:01:46.665 LINK spdk_nvme_perf 00:01:46.665 LINK bdevperf 00:01:46.665 LINK vhost_fuzz 00:01:46.923 LINK cuse 00:01:47.491 LINK iscsi_fuzz 00:01:49.395 LINK esnap 00:01:49.395 00:01:49.395 real 0m42.404s 00:01:49.395 user 6m30.187s 00:01:49.395 sys 3m28.407s 00:01:49.395 20:55:05 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:49.395 20:55:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.395 ************************************ 00:01:49.395 END TEST make 00:01:49.395 ************************************ 00:01:49.395 20:55:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:49.395 20:55:05 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:49.395 20:55:05 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:49.395 20:55:05 -- 
pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.395 20:55:05 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:49.395 20:55:05 -- pm/common@45 -- $ pid=2747830 00:01:49.395 20:55:05 -- pm/common@52 -- $ sudo kill -TERM 2747830 00:01:49.395 20:55:05 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.395 20:55:05 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:49.395 20:55:05 -- pm/common@45 -- $ pid=2747832 00:01:49.395 20:55:05 -- pm/common@52 -- $ sudo kill -TERM 2747832 00:01:49.654 20:55:05 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.654 20:55:05 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:49.654 20:55:05 -- pm/common@45 -- $ pid=2747833 00:01:49.654 20:55:05 -- pm/common@52 -- $ sudo kill -TERM 2747833 00:01:49.654 20:55:05 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.654 20:55:05 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:49.654 20:55:05 -- pm/common@45 -- $ pid=2747834 00:01:49.654 20:55:05 -- pm/common@52 -- $ sudo kill -TERM 2747834 00:01:49.654 20:55:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:49.654 20:55:05 -- nvmf/common.sh@7 -- # uname -s 00:01:49.654 20:55:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:49.654 20:55:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:49.654 20:55:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:49.654 20:55:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:49.654 20:55:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:49.654 20:55:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:49.654 20:55:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:49.654 20:55:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:49.654 20:55:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:49.654 20:55:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:49.654 20:55:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:49.654 20:55:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:49.654 20:55:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:49.654 20:55:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:49.654 20:55:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:49.654 20:55:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:49.654 20:55:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:49.654 20:55:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:49.654 20:55:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.654 20:55:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.654 20:55:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.654 20:55:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.654 20:55:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.654 20:55:05 -- paths/export.sh@5 -- # export PATH 00:01:49.654 20:55:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.654 20:55:05 -- nvmf/common.sh@47 -- # : 0 00:01:49.654 20:55:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:49.654 20:55:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:49.654 20:55:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:49.654 20:55:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:49.654 20:55:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:49.654 20:55:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:49.654 20:55:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:49.654 20:55:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:49.654 20:55:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:49.654 20:55:05 -- spdk/autotest.sh@32 -- # uname -s 00:01:49.654 20:55:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:49.654 20:55:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:49.654 20:55:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:49.654 20:55:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:49.654 20:55:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:49.654 20:55:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:49.654 20:55:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:49.654 20:55:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:49.654 20:55:05 -- spdk/autotest.sh@48 -- # udevadm_pid=2806705 00:01:49.654 20:55:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:49.654 20:55:05 -- pm/common@17 -- # local monitor 00:01:49.654 20:55:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:49.654 20:55:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.654 20:55:05 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2806708 00:01:49.654 20:55:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.654 20:55:05 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2806711 00:01:49.654 20:55:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.654 20:55:05 -- pm/common@21 -- # date +%s 00:01:49.654 20:55:05 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2806714 00:01:49.654 20:55:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.654 20:55:05 -- pm/common@21 -- # date +%s 00:01:49.654 20:55:05 -- pm/common@21 -- # date 
+%s 00:01:49.654 20:55:05 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2806719 00:01:49.654 20:55:05 -- pm/common@26 -- # sleep 1 00:01:49.654 20:55:05 -- pm/common@21 -- # date +%s 00:01:49.654 20:55:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713466505 00:01:49.654 20:55:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713466505 00:01:49.654 20:55:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713466505 00:01:49.654 20:55:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713466505 00:01:49.912 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713466505_collect-vmstat.pm.log 00:01:49.913 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713466505_collect-cpu-temp.pm.log 00:01:49.913 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713466505_collect-bmc-pm.bmc.pm.log 00:01:49.913 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713466505_collect-cpu-load.pm.log 00:01:50.851 20:55:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:50.851 20:55:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:50.851 20:55:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:50.851 20:55:06 -- common/autotest_common.sh@10 -- # set +x 00:01:50.851 20:55:06 -- spdk/autotest.sh@59 -- # create_test_list 00:01:50.851 20:55:06 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:50.851 20:55:06 -- common/autotest_common.sh@10 -- # set +x 00:01:50.851 20:55:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:50.851 20:55:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.851 20:55:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.851 20:55:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:50.851 20:55:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.851 20:55:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:50.851 20:55:06 -- common/autotest_common.sh@1441 -- # uname 00:01:50.851 20:55:06 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:50.851 20:55:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:50.851 20:55:06 -- common/autotest_common.sh@1461 -- # uname 00:01:50.851 20:55:06 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:50.851 20:55:06 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:50.851 20:55:06 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:50.851 20:55:06 -- spdk/autotest.sh@72 -- # hash lcov 00:01:50.851 20:55:06 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == 
*\c\l\a\n\g* ]] 00:01:50.851 20:55:06 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:50.851 --rc lcov_branch_coverage=1 00:01:50.851 --rc lcov_function_coverage=1 00:01:50.851 --rc genhtml_branch_coverage=1 00:01:50.851 --rc genhtml_function_coverage=1 00:01:50.851 --rc genhtml_legend=1 00:01:50.851 --rc geninfo_all_blocks=1 00:01:50.851 ' 00:01:50.851 20:55:06 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:50.851 --rc lcov_branch_coverage=1 00:01:50.851 --rc lcov_function_coverage=1 00:01:50.851 --rc genhtml_branch_coverage=1 00:01:50.851 --rc genhtml_function_coverage=1 00:01:50.851 --rc genhtml_legend=1 00:01:50.851 --rc geninfo_all_blocks=1 00:01:50.851 ' 00:01:50.851 20:55:06 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:50.851 --rc lcov_branch_coverage=1 00:01:50.851 --rc lcov_function_coverage=1 00:01:50.851 --rc genhtml_branch_coverage=1 00:01:50.851 --rc genhtml_function_coverage=1 00:01:50.851 --rc genhtml_legend=1 00:01:50.851 --rc geninfo_all_blocks=1 00:01:50.851 --no-external' 00:01:50.851 20:55:06 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:50.851 --rc lcov_branch_coverage=1 00:01:50.851 --rc lcov_function_coverage=1 00:01:50.851 --rc genhtml_branch_coverage=1 00:01:50.851 --rc genhtml_function_coverage=1 00:01:50.851 --rc genhtml_legend=1 00:01:50.851 --rc geninfo_all_blocks=1 00:01:50.851 --no-external' 00:01:50.851 20:55:06 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:50.851 lcov: LCOV version 1.14 00:01:50.851 20:55:06 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:59.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:59.008 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:59.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:59.266 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:01:59.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:59.266 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:59.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:59.266 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:11.477 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:11.477 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:11.477 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:11.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:11.478 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:11.478 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:11.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:11.478 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:11.478 20:55:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:11.478 20:55:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:11.478 20:55:27 -- common/autotest_common.sh@10 -- # set +x 00:02:11.478 20:55:27 -- spdk/autotest.sh@91 -- # rm -f 00:02:11.478 20:55:27 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:14.018 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:14.018 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:14.278 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:14.537 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:14.537 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:14.537 20:55:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:14.537 20:55:30 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:14.537 20:55:30 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:14.537 20:55:30 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:14.537 20:55:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:14.537 20:55:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:14.537 20:55:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:14.537 20:55:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:14.537 20:55:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:14.537 20:55:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:14.537 20:55:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:14.537 20:55:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 
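For readers following the trace, the get_zoned_devs/is_block_zoned entries above boil down to scanning sysfs for zoned NVMe namespaces before any destructive step runs. A minimal standalone sketch of that scan, for illustration only (the plain array and echo are simplifications of what autotest actually records):

zoned_devs=()
for nvme in /sys/block/nvme*; do
    # A namespace counts as zoned when queue/zoned reports anything other than "none".
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs+=("${nvme##*/}")
    fi
done
echo "zoned namespaces found: ${#zoned_devs[@]}"

On this run the scan found none, which is why the trace shows the (( 0 > 0 )) guard evaluating false and the per-namespace wipe loop over /dev/nvme*n* starting immediately afterwards.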
00:02:14.537 20:55:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:14.537 20:55:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:14.537 20:55:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:14.537 No valid GPT data, bailing 00:02:14.537 20:55:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:14.537 20:55:30 -- scripts/common.sh@391 -- # pt= 00:02:14.537 20:55:30 -- scripts/common.sh@392 -- # return 1 00:02:14.537 20:55:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:14.537 1+0 records in 00:02:14.537 1+0 records out 00:02:14.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00262765 s, 399 MB/s 00:02:14.537 20:55:30 -- spdk/autotest.sh@118 -- # sync 00:02:14.537 20:55:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:14.537 20:55:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:14.537 20:55:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:18.764 20:55:34 -- spdk/autotest.sh@124 -- # uname -s 00:02:18.764 20:55:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:18.764 20:55:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:18.764 20:55:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:18.764 20:55:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:18.764 20:55:34 -- common/autotest_common.sh@10 -- # set +x 00:02:18.764 ************************************ 00:02:18.764 START TEST setup.sh 00:02:18.764 ************************************ 00:02:18.764 20:55:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:19.024 * Looking for test storage... 00:02:19.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:19.024 20:55:34 -- setup/test-setup.sh@10 -- # uname -s 00:02:19.024 20:55:34 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:19.024 20:55:34 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:19.024 20:55:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:19.024 20:55:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:19.024 20:55:34 -- common/autotest_common.sh@10 -- # set +x 00:02:19.024 ************************************ 00:02:19.024 START TEST acl 00:02:19.024 ************************************ 00:02:19.024 20:55:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:19.024 * Looking for test storage... 
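The block_in_use check and the dd that follow in the trace amount to: confirm the namespace carries no recognizable partition table, then scrub its first MiB so later tests start from a blank label area. A rough standalone equivalent, assuming a hypothetical target device and root privileges (the harness additionally consults its own spdk-gpt.py helper, which is omitted here):

dev=/dev/nvme0n1    # example device; autotest derives this from /dev/nvme*n*
pt=$(blkid -s PTTYPE -o value "$dev")
if [[ -z $pt ]]; then
    # No partition-table type reported, so the namespace is treated as free to reuse.
    dd if=/dev/zero of="$dev" bs=1M count=1
fi

The "No valid GPT data, bailing" message and the empty blkid result above are exactly this case: nothing on the namespace, so the 1 MiB wipe runs and completes at roughly 400 MB/s for the single block.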
00:02:19.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:19.024 20:55:34 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:19.024 20:55:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:19.024 20:55:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:19.024 20:55:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:19.024 20:55:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:19.024 20:55:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:19.024 20:55:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:19.024 20:55:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:19.024 20:55:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:19.024 20:55:34 -- setup/acl.sh@12 -- # devs=() 00:02:19.024 20:55:34 -- setup/acl.sh@12 -- # declare -a devs 00:02:19.024 20:55:34 -- setup/acl.sh@13 -- # drivers=() 00:02:19.024 20:55:34 -- setup/acl.sh@13 -- # declare -A drivers 00:02:19.024 20:55:34 -- setup/acl.sh@51 -- # setup reset 00:02:19.024 20:55:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:19.024 20:55:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:23.221 20:55:38 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:23.221 20:55:38 -- setup/acl.sh@16 -- # local dev driver 00:02:23.221 20:55:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.221 20:55:38 -- setup/acl.sh@15 -- # setup output status 00:02:23.221 20:55:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:23.221 20:55:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:25.127 Hugepages 00:02:25.127 node hugesize free / total 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # continue 00:02:25.127 20:55:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # continue 00:02:25.127 20:55:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # continue 00:02:25.127 20:55:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 00:02:25.127 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # continue 00:02:25.127 20:55:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:25.127 20:55:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.127 20:55:40 -- setup/acl.sh@20 -- # continue 00:02:25.127 20:55:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:25.127 20:55:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.127 20:55:40 -- setup/acl.sh@20 -- # continue 00:02:25.127 20:55:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.127 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:25.127 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.127 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.127 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.127 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.127 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.127 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.127 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.127 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.416 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:25.417 20:55:41 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:25.417 20:55:41 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.417 20:55:41 -- setup/acl.sh@20 -- # continue 00:02:25.417 20:55:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.417 20:55:41 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:25.417 20:55:41 -- setup/acl.sh@54 -- # run_test denied denied 00:02:25.417 20:55:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:25.417 20:55:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:25.417 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:02:25.417 ************************************ 00:02:25.417 START TEST denied 00:02:25.417 ************************************ 00:02:25.417 20:55:41 -- common/autotest_common.sh@1111 -- # denied 00:02:25.417 20:55:41 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:25.417 20:55:41 -- setup/acl.sh@38 -- # setup output config 00:02:25.417 20:55:41 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:25.417 20:55:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.417 20:55:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:28.723 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:28.723 20:55:44 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:28.723 20:55:44 -- setup/acl.sh@28 -- # local dev driver 00:02:28.723 20:55:44 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:28.723 20:55:44 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:28.723 20:55:44 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:28.723 20:55:44 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:28.723 20:55:44 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:28.723 20:55:44 -- setup/acl.sh@41 -- # setup reset 00:02:28.723 20:55:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.723 20:55:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:32.921 00:02:32.921 real 0m7.315s 00:02:32.921 user 0m2.438s 00:02:32.921 sys 0m4.237s 00:02:32.921 20:55:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:32.921 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:02:32.921 ************************************ 00:02:32.921 END TEST denied 00:02:32.921 ************************************ 00:02:32.921 20:55:48 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:32.921 20:55:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:32.921 20:55:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:32.921 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:02:32.921 ************************************ 00:02:32.921 START TEST allowed 00:02:32.921 ************************************ 00:02:32.921 20:55:48 -- common/autotest_common.sh@1111 -- # allowed 00:02:32.921 20:55:48 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:32.921 20:55:48 -- setup/acl.sh@45 -- # setup output config 00:02:32.921 20:55:48 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:32.921 20:55:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.921 20:55:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
00:02:37.123 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:37.123 20:55:52 -- setup/acl.sh@47 -- # verify 00:02:37.123 20:55:52 -- setup/acl.sh@28 -- # local dev driver 00:02:37.123 20:55:52 -- setup/acl.sh@48 -- # setup reset 00:02:37.123 20:55:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:37.123 20:55:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.414 00:02:40.414 real 0m7.113s 00:02:40.414 user 0m2.192s 00:02:40.414 sys 0m4.053s 00:02:40.414 20:55:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:40.414 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:02:40.414 ************************************ 00:02:40.414 END TEST allowed 00:02:40.414 ************************************ 00:02:40.414 00:02:40.414 real 0m21.051s 00:02:40.414 user 0m7.062s 00:02:40.414 sys 0m12.544s 00:02:40.414 20:55:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:40.414 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:02:40.414 ************************************ 00:02:40.414 END TEST acl 00:02:40.414 ************************************ 00:02:40.414 20:55:55 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:40.414 20:55:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:40.414 20:55:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:40.414 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:02:40.414 ************************************ 00:02:40.414 START TEST hugepages 00:02:40.414 ************************************ 00:02:40.414 20:55:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:40.414 * Looking for test storage... 
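The denied/allowed pair that just finished exercises setup.sh's PCI filtering: with PCI_BLOCKED set, the script must print "Skipping denied controller" for that BDF, and with PCI_ALLOWED set it must rebind the same controller to vfio-pci. A condensed, illustrative version of those two assertions (BDF copied from this run; assumes it is executed as root from the spdk repository root rather than through the acl.sh wrappers):

PCI_BLOCKED=' 0000:5e:00.0' scripts/setup.sh config | grep 'Skipping denied controller at 0000:5e:00.0' && echo 'denied: OK'
scripts/setup.sh reset
PCI_ALLOWED='0000:5e:00.0' scripts/setup.sh config | grep -E '0000:5e:00.0 .*: nvme -> .*' && echo 'allowed: OK'
scripts/setup.sh reset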
00:02:40.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:40.414 20:55:56 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:40.414 20:55:56 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:40.414 20:55:56 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:40.414 20:55:56 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:40.414 20:55:56 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:40.414 20:55:56 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:40.414 20:55:56 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:40.414 20:55:56 -- setup/common.sh@18 -- # local node= 00:02:40.414 20:55:56 -- setup/common.sh@19 -- # local var val 00:02:40.414 20:55:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:40.414 20:55:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.414 20:55:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.414 20:55:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.414 20:55:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.414 20:55:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.414 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.414 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 169730228 kB' 'MemAvailable: 173066084 kB' 'Buffers: 3888 kB' 'Cached: 13478552 kB' 'SwapCached: 0 kB' 'Active: 10319260 kB' 'Inactive: 3664944 kB' 'Active(anon): 9747892 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505044 kB' 'Mapped: 248872 kB' 'Shmem: 9246128 kB' 'KReclaimable: 487992 kB' 'Slab: 1129560 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 641568 kB' 'KernelStack: 20448 kB' 'PageTables: 9688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 11204300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318092 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 
00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.415 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.415 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 
00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # continue 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:40.416 20:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:40.416 20:55:56 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:40.416 20:55:56 -- setup/common.sh@33 -- # echo 2048 00:02:40.416 20:55:56 -- setup/common.sh@33 -- # return 0 00:02:40.416 20:55:56 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:40.416 20:55:56 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:40.416 20:55:56 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:40.416 20:55:56 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:40.416 20:55:56 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:40.416 20:55:56 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:40.416 20:55:56 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:40.416 20:55:56 -- setup/hugepages.sh@207 -- # get_nodes 00:02:40.416 20:55:56 -- setup/hugepages.sh@27 -- # local node 00:02:40.416 20:55:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.416 20:55:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:40.416 20:55:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.416 20:55:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:40.416 20:55:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:40.416 20:55:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:40.416 20:55:56 -- setup/hugepages.sh@208 -- # clear_hp 00:02:40.416 20:55:56 -- setup/hugepages.sh@37 -- # local node hp 00:02:40.416 20:55:56 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:40.416 20:55:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:40.416 20:55:56 -- setup/hugepages.sh@41 -- # echo 0 00:02:40.416 20:55:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:40.416 20:55:56 -- setup/hugepages.sh@41 -- # echo 0 00:02:40.416 20:55:56 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:40.416 20:55:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:40.416 20:55:56 -- setup/hugepages.sh@41 -- # echo 0 00:02:40.416 20:55:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:40.416 20:55:56 -- setup/hugepages.sh@41 -- # echo 0 00:02:40.416 20:55:56 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:40.416 20:55:56 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:40.416 20:55:56 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:40.416 20:55:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:40.416 20:55:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:40.416 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:02:40.416 ************************************ 00:02:40.416 START TEST default_setup 00:02:40.416 ************************************ 00:02:40.416 20:55:56 -- common/autotest_common.sh@1111 -- # default_setup 00:02:40.416 20:55:56 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:40.416 20:55:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:40.416 20:55:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:40.416 20:55:56 -- setup/hugepages.sh@51 -- # shift 00:02:40.416 20:55:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:40.416 20:55:56 -- setup/hugepages.sh@52 -- # local node_ids 00:02:40.416 20:55:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:40.416 20:55:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:40.416 20:55:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:40.416 20:55:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:40.416 20:55:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.416 20:55:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:40.416 20:55:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.416 20:55:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.416 20:55:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.416 20:55:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
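The xtrace above shows setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize (2048 kB on this machine), after which hugepages.sh derives the page count for the requested 2097152 kB and zeroes any pre-existing per-node reservations (CLEAR_HUGE=yes). A minimal bash sketch of that pattern follows; meminfo_value is a hypothetical helper name, not a function from the SPDK scripts, and the sysfs loop only illustrates what the clear_hp trace is doing.

#!/usr/bin/env bash
# Sketch only: look up one field of /proc/meminfo the way the traced loop does
# (split on ': ', match the key, print the value, stop).
meminfo_value() {                                  # hypothetical helper name
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

default_hugepages=$(meminfo_value Hugepagesize)    # 2048 (kB) in this run

# 2097152 kB requested / 2048 kB per page -> 1024, matching nr_hugepages=1024 above.
size_kb=2097152
nr_hugepages=$(( size_kb / default_hugepages ))

# CLEAR_HUGE=yes: reset every per-node reservation before the test, as the
# clear_hp loop does for node0 and node1 in the trace.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 | sudo tee "$hp" > /dev/null
done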
00:02:40.416 20:55:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:40.416 20:55:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:40.416 20:55:56 -- setup/hugepages.sh@73 -- # return 0 00:02:40.416 20:55:56 -- setup/hugepages.sh@137 -- # setup output 00:02:40.416 20:55:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.416 20:55:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:43.707 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:43.707 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:44.274 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:44.536 20:56:00 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:44.536 20:56:00 -- setup/hugepages.sh@89 -- # local node 00:02:44.536 20:56:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.536 20:56:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.536 20:56:00 -- setup/hugepages.sh@92 -- # local surp 00:02:44.536 20:56:00 -- setup/hugepages.sh@93 -- # local resv 00:02:44.536 20:56:00 -- setup/hugepages.sh@94 -- # local anon 00:02:44.537 20:56:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:44.537 20:56:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:44.537 20:56:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:44.537 20:56:00 -- setup/common.sh@18 -- # local node= 00:02:44.537 20:56:00 -- setup/common.sh@19 -- # local var val 00:02:44.537 20:56:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.537 20:56:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.537 20:56:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.537 20:56:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.537 20:56:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.537 20:56:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171847144 kB' 'MemAvailable: 175183000 kB' 'Buffers: 3888 kB' 'Cached: 13478656 kB' 'SwapCached: 0 kB' 'Active: 10338452 kB' 'Inactive: 3664944 kB' 'Active(anon): 9767084 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523760 kB' 'Mapped: 248700 kB' 'Shmem: 9246232 kB' 'KReclaimable: 487992 kB' 'Slab: 1128164 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 640172 kB' 'KernelStack: 
20464 kB' 'PageTables: 9728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11214932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318060 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.537 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.537 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.538 20:56:00 -- setup/common.sh@33 -- # echo 0 00:02:44.538 20:56:00 -- setup/common.sh@33 -- # return 0 00:02:44.538 20:56:00 -- setup/hugepages.sh@97 -- # anon=0 00:02:44.538 20:56:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:44.538 20:56:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.538 20:56:00 -- setup/common.sh@18 -- # local node= 00:02:44.538 20:56:00 -- setup/common.sh@19 -- # local var val 00:02:44.538 20:56:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.538 20:56:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.538 20:56:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.538 20:56:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.538 20:56:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.538 20:56:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171847956 kB' 'MemAvailable: 175183812 kB' 'Buffers: 3888 kB' 'Cached: 13478660 kB' 'SwapCached: 0 kB' 'Active: 10338376 kB' 'Inactive: 3664944 kB' 'Active(anon): 9767008 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523580 kB' 'Mapped: 248776 kB' 'Shmem: 9246236 kB' 'KReclaimable: 487992 kB' 'Slab: 1127988 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639996 kB' 'KernelStack: 20400 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11213452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318012 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 
kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- 
setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.538 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.538 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 
00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.539 20:56:00 -- setup/common.sh@33 -- # echo 0 00:02:44.539 20:56:00 -- setup/common.sh@33 -- # return 0 00:02:44.539 20:56:00 -- setup/hugepages.sh@99 -- # surp=0 00:02:44.539 20:56:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:44.539 20:56:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:44.539 20:56:00 -- setup/common.sh@18 -- # local node= 00:02:44.539 20:56:00 -- setup/common.sh@19 -- # local var val 00:02:44.539 20:56:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.539 20:56:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.539 20:56:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.539 20:56:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.539 20:56:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.539 20:56:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171846320 kB' 'MemAvailable: 175182176 kB' 'Buffers: 3888 kB' 'Cached: 13478672 kB' 'SwapCached: 0 kB' 'Active: 10338664 kB' 'Inactive: 3664944 kB' 'Active(anon): 9767296 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524300 kB' 'Mapped: 248688 kB' 'Shmem: 9246248 kB' 'KReclaimable: 487992 kB' 'Slab: 1127980 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639988 kB' 'KernelStack: 20640 kB' 'PageTables: 10260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11214960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318076 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.539 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.539 20:56:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 
20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # 
continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.540 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.540 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.541 20:56:00 -- setup/common.sh@33 -- # echo 0 00:02:44.541 20:56:00 -- setup/common.sh@33 -- # return 0 00:02:44.541 20:56:00 -- setup/hugepages.sh@100 -- # resv=0 00:02:44.541 20:56:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:44.541 nr_hugepages=1024 00:02:44.541 20:56:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.541 resv_hugepages=0 00:02:44.541 20:56:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.541 surplus_hugepages=0 00:02:44.541 20:56:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.541 anon_hugepages=0 00:02:44.541 20:56:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.541 20:56:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:44.541 20:56:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.541 20:56:00 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:02:44.541 20:56:00 -- setup/common.sh@18 -- # local node= 00:02:44.541 20:56:00 -- setup/common.sh@19 -- # local var val 00:02:44.541 20:56:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.541 20:56:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.541 20:56:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.541 20:56:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.541 20:56:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.541 20:56:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171846596 kB' 'MemAvailable: 175182452 kB' 'Buffers: 3888 kB' 'Cached: 13478684 kB' 'SwapCached: 0 kB' 'Active: 10339328 kB' 'Inactive: 3664944 kB' 'Active(anon): 9767960 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524824 kB' 'Mapped: 249208 kB' 'Shmem: 9246260 kB' 'KReclaimable: 487992 kB' 'Slab: 1127980 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639988 kB' 'KernelStack: 20576 kB' 'PageTables: 9812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11216328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318108 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.541 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.541 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # 
continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 
00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.542 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.542 20:56:00 -- setup/common.sh@33 -- # echo 1024 00:02:44.542 20:56:00 -- setup/common.sh@33 -- # return 0 00:02:44.542 20:56:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.542 20:56:00 -- setup/hugepages.sh@112 -- # get_nodes 00:02:44.542 20:56:00 -- setup/hugepages.sh@27 -- # local node 00:02:44.542 20:56:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.542 20:56:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:44.542 20:56:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.542 20:56:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:44.542 20:56:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.542 20:56:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.542 20:56:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.542 20:56:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.542 20:56:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:44.542 20:56:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.542 20:56:00 -- setup/common.sh@18 -- # local node=0 00:02:44.542 20:56:00 -- setup/common.sh@19 -- # local var val 00:02:44.542 20:56:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.542 20:56:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.542 20:56:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:44.542 20:56:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:44.542 20:56:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.542 20:56:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.542 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84520016 kB' 'MemUsed: 13142668 kB' 'SwapCached: 0 kB' 'Active: 6471936 kB' 'Inactive: 3326740 kB' 'Active(anon): 
6278264 kB' 'Inactive(anon): 0 kB' 'Active(file): 193672 kB' 'Inactive(file): 3326740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9500104 kB' 'Mapped: 156376 kB' 'AnonPages: 301756 kB' 'Shmem: 5979692 kB' 'KernelStack: 11368 kB' 'PageTables: 6168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 203872 kB' 'Slab: 498224 kB' 'SReclaimable: 203872 kB' 'SUnreclaim: 294352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 
20:56:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': 
' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # continue 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.543 20:56:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.543 20:56:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.543 20:56:00 -- setup/common.sh@33 -- # echo 0 00:02:44.543 20:56:00 -- setup/common.sh@33 -- # return 0 00:02:44.543 20:56:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.543 20:56:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.543 20:56:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.543 20:56:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.543 20:56:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:44.543 node0=1024 expecting 1024 00:02:44.543 20:56:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:44.543 00:02:44.543 real 0m4.142s 00:02:44.543 user 0m1.311s 00:02:44.543 sys 0m2.093s 00:02:44.543 20:56:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:44.543 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:02:44.543 ************************************ 00:02:44.543 END TEST default_setup 00:02:44.543 ************************************ 00:02:44.803 20:56:00 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:44.803 20:56:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:44.803 20:56:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:44.803 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:02:44.803 ************************************ 00:02:44.803 START TEST per_node_1G_alloc 00:02:44.803 ************************************ 00:02:44.803 20:56:00 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:44.803 20:56:00 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:44.803 20:56:00 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:44.803 20:56:00 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:44.803 20:56:00 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:44.803 20:56:00 -- setup/hugepages.sh@51 -- # shift 00:02:44.803 20:56:00 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:44.803 20:56:00 -- setup/hugepages.sh@52 -- # local node_ids 00:02:44.803 20:56:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.803 20:56:00 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:44.803 20:56:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:44.803 20:56:00 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:44.803 20:56:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.803 20:56:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:44.803 20:56:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.803 20:56:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.803 20:56:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.803 20:56:00 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:44.803 20:56:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:44.803 20:56:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:44.803 20:56:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:44.803 20:56:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:44.803 20:56:00 -- setup/hugepages.sh@73 -- # return 0 00:02:44.803 20:56:00 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:44.803 
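The trace above is the meminfo bookkeeping behind the default_setup result: get_meminfo in setup/common.sh reads /proc/meminfo (or the per-node copy under /sys/devices/system/node/node<N>/meminfo when a node argument is given), strips the "Node <N> " prefix, then walks each "key: value" pair and echoes the value once the requested key matches, which is what the long runs of "[[ <key> == ... ]]" / "continue" lines are. verify_nr_hugepages then checks that the reported HugePages_Total equals nr_hugepages plus surplus plus reserved pages. A minimal bash sketch of that lookup and check, simplified from the traced helper (the sed-based prefix strip and the literal values below are illustrative, not the exact setup/common.sh code):

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup traced above: print the value of a
# /proc/meminfo (or per-node meminfo) field.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo file instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # skip keys we were not asked for
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # per-node lines carry a "Node <N> " prefix
    return 1
}

# Accounting check mirroring the traced run (values from this run in comments):
total=$(get_meminfo HugePages_Total)    # 1024
resv=$(get_meminfo HugePages_Rsvd)      # 0
surp=$(get_meminfo HugePages_Surp 0)    # 0, read from node0's meminfo
nr_hugepages=1024                       # expected count for this run
(( total == nr_hugepages + surp + resv )) && echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv"

Reading the per-node file instead of /proc/meminfo is what lets the same helper answer both the global HugePages_Total query and the node0 HugePages_Surp query seen in the trace.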
20:56:00 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:44.803 20:56:00 -- setup/hugepages.sh@146 -- # setup output 00:02:44.803 20:56:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.803 20:56:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:48.095 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:48.095 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:48.095 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:48.095 20:56:03 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:48.095 20:56:03 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:48.095 20:56:03 -- setup/hugepages.sh@89 -- # local node 00:02:48.095 20:56:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.095 20:56:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.095 20:56:03 -- setup/hugepages.sh@92 -- # local surp 00:02:48.096 20:56:03 -- setup/hugepages.sh@93 -- # local resv 00:02:48.096 20:56:03 -- setup/hugepages.sh@94 -- # local anon 00:02:48.096 20:56:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.096 20:56:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.096 20:56:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.096 20:56:03 -- setup/common.sh@18 -- # local node= 00:02:48.096 20:56:03 -- setup/common.sh@19 -- # local var val 00:02:48.096 20:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.096 20:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.096 20:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.096 20:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.096 20:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.096 20:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171849996 kB' 'MemAvailable: 175185852 kB' 'Buffers: 3888 kB' 'Cached: 13478764 kB' 'SwapCached: 0 kB' 'Active: 10337456 kB' 'Inactive: 3664944 kB' 'Active(anon): 9766088 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522520 kB' 'Mapped: 248780 
kB' 'Shmem: 9246340 kB' 'KReclaimable: 487992 kB' 'Slab: 1127536 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639544 kB' 'KernelStack: 20432 kB' 'PageTables: 9684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11212648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318156 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.096 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.097 20:56:03 -- setup/common.sh@33 -- # echo 0 00:02:48.097 20:56:03 -- setup/common.sh@33 -- # return 0 00:02:48.097 20:56:03 -- setup/hugepages.sh@97 -- # anon=0 00:02:48.097 20:56:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.097 20:56:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.097 20:56:03 -- setup/common.sh@18 -- # local node= 00:02:48.097 20:56:03 -- setup/common.sh@19 -- # local var val 00:02:48.097 20:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.097 20:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.097 20:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.097 20:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.097 20:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.097 20:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171851284 kB' 'MemAvailable: 175187140 kB' 'Buffers: 3888 kB' 'Cached: 13478764 kB' 'SwapCached: 0 kB' 'Active: 10337248 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765880 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522356 kB' 'Mapped: 248780 kB' 'Shmem: 9246340 kB' 'KReclaimable: 487992 kB' 'Slab: 1127520 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639528 kB' 'KernelStack: 20400 kB' 'PageTables: 9564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11212660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318108 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 20:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 
20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 20:56:03 -- setup/common.sh@33 -- # echo 0 00:02:48.098 20:56:03 -- setup/common.sh@33 -- # return 0 00:02:48.098 20:56:03 -- setup/hugepages.sh@99 -- # surp=0 00:02:48.098 20:56:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.098 20:56:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.098 20:56:03 -- setup/common.sh@18 -- # local node= 00:02:48.098 20:56:03 -- setup/common.sh@19 -- # local var val 00:02:48.098 20:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.098 20:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.098 20:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.098 20:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.098 20:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.098 20:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171851288 kB' 'MemAvailable: 175187144 kB' 'Buffers: 3888 kB' 'Cached: 13478776 kB' 'SwapCached: 0 kB' 'Active: 10336956 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765588 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522036 kB' 'Mapped: 248772 kB' 'Shmem: 9246352 kB' 'KReclaimable: 487992 kB' 'Slab: 1127496 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639504 kB' 'KernelStack: 20400 kB' 'PageTables: 9556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11212676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318124 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.098 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.099 20:56:03 -- setup/common.sh@32 -- # continue 
00:02:48.099 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.099 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.100 20:56:03 -- setup/common.sh@33 -- # echo 0 00:02:48.100 20:56:03 -- setup/common.sh@33 -- # return 0 00:02:48.100 20:56:03 -- setup/hugepages.sh@100 -- # resv=0 00:02:48.100 20:56:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:48.100 nr_hugepages=1024 00:02:48.100 20:56:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.100 resv_hugepages=0 00:02:48.100 20:56:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.100 surplus_hugepages=0 00:02:48.100 20:56:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.100 anon_hugepages=0 00:02:48.100 20:56:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.100 20:56:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
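The trace above is the get_meminfo helper in setup/common.sh walking /proc/meminfo one line at a time with IFS=': ' read -r var val _, skipping every field that does not match the requested key (first HugePages_Surp, then HugePages_Rsvd), and echoing the matching value; hugepages.sh then records surp=0 and resv=0 and checks that the configured 1024 pages agree with the counters read back from meminfo. The following is a condensed sketch of that pattern for reference only; the function name get_meminfo_sketch, the sed-based stripping of the per-node "Node <N> " prefix, and the standalone check at the end are illustrative and are not the actual helpers in setup/common.sh or setup/hugepages.sh.

get_meminfo_sketch() {
    # Illustrative only: condensed from the behaviour visible in the trace,
    # not the implementation used by setup/common.sh.
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
        mem_f=/sys/devices/system/node/node${node}/meminfo
    # Per-node meminfo lines carry a "Node <N> " prefix; strip it so the
    # key comparison works the same way for both files.
    sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; break; }
    done
}

# Values taken from the run above: surplus and reserved pages are both 0,
# the pool size read back is 1024, so the consistency check passes.
surp=$(get_meminfo_sketch HugePages_Surp)      # -> 0
resv=$(get_meminfo_sketch HugePages_Rsvd)      # -> 0
total=$(get_meminfo_sketch HugePages_Total)    # -> 1024
nr_hugepages=1024                              # pool size configured by the test
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"

The same helper is reused per NUMA node later in the trace by pointing mem_f at /sys/devices/system/node/nodeN/meminfo, which is how the test confirms the 512 + 512 split reported as "node0=512 expecting 512" and "node1=512 expecting 512".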
00:02:48.100 20:56:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.100 20:56:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.100 20:56:03 -- setup/common.sh@18 -- # local node= 00:02:48.100 20:56:03 -- setup/common.sh@19 -- # local var val 00:02:48.100 20:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.100 20:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.100 20:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.100 20:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.100 20:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.100 20:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171848140 kB' 'MemAvailable: 175183996 kB' 'Buffers: 3888 kB' 'Cached: 13478796 kB' 'SwapCached: 0 kB' 'Active: 10339168 kB' 'Inactive: 3664944 kB' 'Active(anon): 9767800 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524720 kB' 'Mapped: 249200 kB' 'Shmem: 9246372 kB' 'KReclaimable: 487992 kB' 'Slab: 1127468 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639476 kB' 'KernelStack: 20384 kB' 'PageTables: 9512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11216552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318092 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.100 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.100 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 
-- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 
00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- 
setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.101 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.101 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.101 20:56:03 -- setup/common.sh@33 -- # echo 1024 00:02:48.101 20:56:03 -- setup/common.sh@33 -- # return 0 00:02:48.101 20:56:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.101 20:56:03 -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.101 20:56:03 -- setup/hugepages.sh@27 -- # local node 00:02:48.101 20:56:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.101 20:56:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.101 20:56:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.101 20:56:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.102 20:56:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.102 20:56:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.102 20:56:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.102 20:56:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.102 20:56:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.102 20:56:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.102 20:56:03 -- setup/common.sh@18 -- # local node=0 00:02:48.102 20:56:03 -- setup/common.sh@19 -- # local var val 00:02:48.102 20:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.102 20:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.102 20:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.102 20:56:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.102 20:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.102 20:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85582708 kB' 'MemUsed: 12079976 kB' 'SwapCached: 0 kB' 'Active: 6470844 kB' 'Inactive: 3326740 kB' 'Active(anon): 6277172 kB' 'Inactive(anon): 0 kB' 'Active(file): 193672 kB' 'Inactive(file): 3326740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9500180 kB' 'Mapped: 156396 kB' 'AnonPages: 300592 kB' 'Shmem: 5979768 kB' 'KernelStack: 11432 kB' 'PageTables: 6360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 203872 kB' 'Slab: 498232 kB' 'SReclaimable: 203872 kB' 'SUnreclaim: 294360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 
-- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.102 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.102 20:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@33 -- # echo 0 00:02:48.103 20:56:03 -- setup/common.sh@33 -- # return 0 00:02:48.103 20:56:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.103 20:56:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.103 20:56:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.103 20:56:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:48.103 20:56:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.103 20:56:03 -- setup/common.sh@18 -- # local node=1 00:02:48.103 20:56:03 -- setup/common.sh@19 -- # local var val 00:02:48.103 20:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.103 20:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.103 20:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:48.103 20:56:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:48.103 20:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.103 20:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 86261400 kB' 'MemUsed: 7457080 kB' 'SwapCached: 0 kB' 'Active: 3865424 kB' 'Inactive: 338204 kB' 'Active(anon): 3487728 kB' 'Inactive(anon): 0 kB' 'Active(file): 377696 kB' 'Inactive(file): 338204 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3982516 kB' 'Mapped: 92712 kB' 'AnonPages: 221200 kB' 'Shmem: 3266616 kB' 'KernelStack: 8968 kB' 'PageTables: 3228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 284120 kB' 'Slab: 629236 kB' 'SReclaimable: 284120 kB' 'SUnreclaim: 345116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 
00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.103 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.103 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # continue 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.104 20:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.104 20:56:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.104 20:56:03 -- setup/common.sh@33 -- # echo 0 00:02:48.104 20:56:03 -- setup/common.sh@33 -- # return 0 00:02:48.104 20:56:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.104 20:56:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.104 20:56:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.104 20:56:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.104 20:56:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:48.104 node0=512 expecting 512 00:02:48.104 20:56:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.104 20:56:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.104 20:56:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.104 20:56:03 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:48.104 node1=512 expecting 512 00:02:48.104 20:56:03 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:48.104 00:02:48.104 real 0m3.154s 00:02:48.104 user 0m1.244s 00:02:48.104 sys 0m1.967s 00:02:48.104 20:56:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:48.104 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:02:48.104 ************************************ 00:02:48.104 END TEST per_node_1G_alloc 00:02:48.104 ************************************ 00:02:48.104 20:56:03 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:48.104 20:56:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:48.104 20:56:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:48.104 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:02:48.104 ************************************ 00:02:48.104 START TEST even_2G_alloc 00:02:48.104 ************************************ 00:02:48.104 20:56:03 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:02:48.104 20:56:03 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:48.104 20:56:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:48.104 20:56:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:48.104 20:56:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.104 20:56:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:48.104 20:56:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:48.104 20:56:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:48.104 20:56:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.104 20:56:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:48.104 20:56:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.104 20:56:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.104 20:56:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.104 20:56:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:48.104 20:56:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:48.104 20:56:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.104 20:56:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:48.104 20:56:03 -- setup/hugepages.sh@83 -- # : 512 00:02:48.104 20:56:03 -- setup/hugepages.sh@84 -- # : 1 00:02:48.104 20:56:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.104 20:56:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:48.104 20:56:03 -- setup/hugepages.sh@83 -- # : 0 00:02:48.104 20:56:03 -- setup/hugepages.sh@84 -- # : 0 00:02:48.104 20:56:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.104 20:56:03 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:48.104 20:56:03 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:48.104 20:56:03 -- setup/hugepages.sh@153 -- # setup output 00:02:48.104 20:56:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.104 20:56:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:51.428 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:51.429 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:02:51.429 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:51.429 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:51.429 20:56:07 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:51.429 20:56:07 -- setup/hugepages.sh@89 -- # local node 00:02:51.429 20:56:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.429 20:56:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.429 20:56:07 -- setup/hugepages.sh@92 -- # local surp 00:02:51.429 20:56:07 -- setup/hugepages.sh@93 -- # local resv 00:02:51.429 20:56:07 -- setup/hugepages.sh@94 -- # local anon 00:02:51.429 20:56:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.429 20:56:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.429 20:56:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.429 20:56:07 -- setup/common.sh@18 -- # local node= 00:02:51.429 20:56:07 -- setup/common.sh@19 -- # local var val 00:02:51.429 20:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.429 20:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.429 20:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.429 20:56:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.429 20:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.429 20:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171910436 kB' 'MemAvailable: 175246292 kB' 'Buffers: 3888 kB' 'Cached: 13478888 kB' 'SwapCached: 0 kB' 'Active: 10337604 kB' 'Inactive: 3664944 kB' 'Active(anon): 9766236 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522368 kB' 'Mapped: 247708 kB' 'Shmem: 9246464 kB' 'KReclaimable: 487992 kB' 'Slab: 1126732 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 638740 kB' 'KernelStack: 20416 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11202980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318044 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.429 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.429 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 
20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.430 20:56:07 -- 
setup/common.sh@33 -- # echo 0 00:02:51.430 20:56:07 -- setup/common.sh@33 -- # return 0 00:02:51.430 20:56:07 -- setup/hugepages.sh@97 -- # anon=0 00:02:51.430 20:56:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.430 20:56:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.430 20:56:07 -- setup/common.sh@18 -- # local node= 00:02:51.430 20:56:07 -- setup/common.sh@19 -- # local var val 00:02:51.430 20:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.430 20:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.430 20:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.430 20:56:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.430 20:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.430 20:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171913716 kB' 'MemAvailable: 175249572 kB' 'Buffers: 3888 kB' 'Cached: 13478888 kB' 'SwapCached: 0 kB' 'Active: 10336448 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765080 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521680 kB' 'Mapped: 247640 kB' 'Shmem: 9246464 kB' 'KReclaimable: 487992 kB' 'Slab: 1126720 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 638728 kB' 'KernelStack: 20432 kB' 'PageTables: 9692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11204500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 
20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 
20:56:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.430 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.430 20:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': 
' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.431 20:56:07 -- setup/common.sh@33 -- # echo 0 00:02:51.431 20:56:07 -- setup/common.sh@33 -- # return 0 00:02:51.431 20:56:07 -- setup/hugepages.sh@99 -- # surp=0 00:02:51.431 20:56:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.431 20:56:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.431 20:56:07 -- setup/common.sh@18 -- # local node= 00:02:51.431 20:56:07 -- setup/common.sh@19 -- # local var val 00:02:51.431 20:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.431 20:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.431 20:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.431 20:56:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.431 20:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.431 20:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.431 20:56:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.431 20:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171912188 kB' 'MemAvailable: 175248044 kB' 'Buffers: 3888 kB' 'Cached: 13478900 kB' 'SwapCached: 0 kB' 'Active: 10335708 kB' 'Inactive: 3664944 kB' 'Active(anon): 9764340 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521120 kB' 'Mapped: 247640 kB' 'Shmem: 9246476 kB' 'KReclaimable: 487992 kB' 'Slab: 1126668 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 638676 kB' 'KernelStack: 20544 kB' 'PageTables: 9824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11204512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318092 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.431 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.431 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 
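The long runs of "IFS=': '" / "read -r var val _" / "continue" entries above and below are the xtrace of setup/common.sh's get_meminfo helper: it reads the whole meminfo file and walks it key by key until it reaches the requested key (AnonHugePages and HugePages_Surp above, HugePages_Rsvd here, HugePages_Total a little further down). A minimal sketch of that lookup, reconstructed from the trace; the printf/read pipeline, the node-file check and the "Node <id> " prefix strip are visible in the log, while the exact loop shape, the extglob setup and the argument handling are assumptions rather than the real setup/common.sh source:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}        # key to look up, optional NUMA node
    local var val
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node queries read the node-specific meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <id> "; strip it so the keys
    # look the same as in the global /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    # Every key that is not the one requested produces one
    # IFS / read / [[ ... ]] / continue quadruple in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                 # e.g. 0 for HugePages_Rsvd on this box
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Rsvd

Because the scan restarts from MemTotal on every call, each get_meminfo invocation made by verify_nr_hugepages expands into dozens of near-identical trace entries, which is why this part of the log is so long.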
00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- 
setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.432 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.432 20:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.433 20:56:07 -- setup/common.sh@33 -- # echo 0 00:02:51.433 20:56:07 -- setup/common.sh@33 -- # return 0 00:02:51.433 20:56:07 -- setup/hugepages.sh@100 -- # resv=0 00:02:51.433 20:56:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.433 nr_hugepages=1024 00:02:51.433 20:56:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.433 resv_hugepages=0 00:02:51.433 20:56:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.433 surplus_hugepages=0 00:02:51.433 20:56:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.433 anon_hugepages=0 00:02:51.433 20:56:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.433 20:56:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.433 20:56:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.433 20:56:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.433 20:56:07 -- setup/common.sh@18 -- # local node= 00:02:51.433 20:56:07 -- setup/common.sh@19 -- # local var val 00:02:51.433 20:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.433 20:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.433 20:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.433 20:56:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.433 20:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.433 20:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171917136 kB' 'MemAvailable: 175252992 kB' 'Buffers: 3888 kB' 'Cached: 13478916 kB' 'SwapCached: 0 kB' 'Active: 10335924 kB' 'Inactive: 3664944 kB' 'Active(anon): 9764556 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521276 kB' 'Mapped: 247640 kB' 'Shmem: 9246492 kB' 'KReclaimable: 487992 kB' 'Slab: 1126564 
kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 638572 kB' 'KernelStack: 20560 kB' 'PageTables: 9776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11204528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318108 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 
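At this point even_2G_alloc has asked for 2097152 kB of 2048 kB hugepages spread evenly over both NUMA nodes (NRHUGE=1024, HUGE_EVEN_ALLOC=yes in the setup trace), and verify_nr_hugepages has read AnonHugePages, HugePages_Surp and HugePages_Rsvd back as 0 before querying HugePages_Total. The arithmetic behind nr_hugepages=1024, the two nodes_test[...]=512 assignments and the (( 1024 == nr_hugepages + surp + resv )) check works out as below; this is a worked sketch using this run's numbers, with illustrative variable names rather than the real setup/hugepages.sh internals:

#!/usr/bin/env bash
# Values taken from the trace of this run.
size=2097152              # argument passed to get_test_nr_hugepages (kB, i.e. 2 GiB)
default_hugepages=2048    # Hugepagesize reported in the /proc/meminfo dumps, in kB
nodes=2                   # NUMA nodes on this system

nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
per_node=$(( nr_hugepages / nodes ))           # even allocation -> 512 per node
echo "nr_hugepages=$nr_hugepages per_node=$per_node"

# Verification: the kernel's HugePages_Total has to be fully explained by
# the requested pages plus surplus and reserved pages, all zero here.
total=1024 surp=0 resv=0
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
    && echo "hugepage accounting matches: total=$total"

With surplus and reserved both zero, the HugePages_Total: 1024 / HugePages_Free: 1024 pair in the meminfo dump above matches the requested count exactly, so the (( 1024 == nr_hugepages )) check at hugepages.sh@109 also passes.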
00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.433 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.433 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 
-- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # 
[[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.434 20:56:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.434 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.434 20:56:07 -- setup/common.sh@33 -- # echo 1024 00:02:51.434 20:56:07 -- setup/common.sh@33 -- # return 0 00:02:51.434 20:56:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.434 20:56:07 -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.434 20:56:07 -- setup/hugepages.sh@27 -- # local node 00:02:51.434 20:56:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.434 20:56:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.434 20:56:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.434 20:56:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.434 20:56:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.434 20:56:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.434 20:56:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.434 20:56:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.434 20:56:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.434 20:56:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.434 20:56:07 -- setup/common.sh@18 -- # local node=0 00:02:51.434 20:56:07 -- setup/common.sh@19 -- # local var val 00:02:51.434 20:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.434 20:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.434 20:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.434 20:56:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.434 20:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.434 20:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.434 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85640184 kB' 'MemUsed: 12022500 kB' 'SwapCached: 0 kB' 'Active: 6471208 kB' 'Inactive: 3326740 kB' 'Active(anon): 6277536 kB' 'Inactive(anon): 0 kB' 'Active(file): 193672 kB' 'Inactive(file): 3326740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9500288 kB' 'Mapped: 155340 kB' 'AnonPages: 300900 kB' 'Shmem: 5979876 kB' 'KernelStack: 11384 kB' 'PageTables: 6140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 203872 kB' 'Slab: 497744 kB' 'SReclaimable: 203872 kB' 'SUnreclaim: 293872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 
00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- 
setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.435 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.435 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.435 20:56:07 -- setup/common.sh@33 -- # echo 0 00:02:51.435 20:56:07 -- setup/common.sh@33 -- # return 0 00:02:51.435 20:56:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.435 20:56:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.435 20:56:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.435 20:56:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:51.435 20:56:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.435 20:56:07 -- setup/common.sh@18 -- # local node=1 00:02:51.435 20:56:07 -- setup/common.sh@19 -- # local var val 00:02:51.435 20:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.436 20:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.436 20:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:51.436 20:56:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:51.436 20:56:07 -- setup/common.sh@28 -- # 
mapfile -t mem 00:02:51.436 20:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 86276852 kB' 'MemUsed: 7441628 kB' 'SwapCached: 0 kB' 'Active: 3864244 kB' 'Inactive: 338204 kB' 'Active(anon): 3486548 kB' 'Inactive(anon): 0 kB' 'Active(file): 377696 kB' 'Inactive(file): 338204 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3982532 kB' 'Mapped: 92300 kB' 'AnonPages: 219940 kB' 'Shmem: 3266632 kB' 'KernelStack: 9112 kB' 'PageTables: 3372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 284120 kB' 'Slab: 628820 kB' 'SReclaimable: 284120 kB' 'SUnreclaim: 344700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 
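The node-scoped lookups traced above follow the same key/value scan, but read /sys/devices/system/node/node<N>/meminfo when it exists and first strip the "Node <N> " prefix from every line (the mem=("${mem[@]#Node +([0-9]) }") step). A sketch under those assumptions, again illustrative rather than the script verbatim:

node_meminfo_field() {
    local get=$1 node=$2 mem_f=/proc/meminfo mem var val _
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 1 MemFree: ..." -> "MemFree: ..."
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

The per-node HugePages_Surp lookup returns 0 for node 0 above and, a few entries below, 0 for node 1 as well, so neither node's 512-page share is adjusted for surplus pages.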
00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- 
setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # continue 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.436 20:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.436 20:56:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.436 20:56:07 -- setup/common.sh@33 -- # echo 0 00:02:51.436 20:56:07 -- setup/common.sh@33 -- # return 0 00:02:51.437 20:56:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.437 20:56:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.437 20:56:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.437 20:56:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.437 20:56:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:51.437 node0=512 expecting 512 00:02:51.437 20:56:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.437 20:56:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.437 20:56:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.437 20:56:07 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:51.437 node1=512 expecting 512 00:02:51.437 20:56:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:51.437 00:02:51.437 real 0m3.299s 00:02:51.437 user 0m1.293s 00:02:51.437 sys 0m2.077s 00:02:51.437 20:56:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:51.437 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:02:51.437 ************************************ 00:02:51.437 END TEST even_2G_alloc 00:02:51.437 ************************************ 00:02:51.437 20:56:07 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:51.437 20:56:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:51.437 20:56:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:51.437 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:02:51.437 ************************************ 00:02:51.437 START TEST odd_alloc 00:02:51.437 ************************************ 00:02:51.437 20:56:07 -- common/autotest_common.sh@1111 -- # odd_alloc 00:02:51.437 20:56:07 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:51.437 20:56:07 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:51.437 20:56:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:51.437 20:56:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.437 20:56:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:51.437 20:56:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:51.437 20:56:07 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:51.437 20:56:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.437 20:56:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:51.437 20:56:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.437 20:56:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.437 20:56:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.696 20:56:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:51.696 20:56:07 -- setup/hugepages.sh@74 -- # (( 0 > 
0 )) 00:02:51.696 20:56:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.696 20:56:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:51.696 20:56:07 -- setup/hugepages.sh@83 -- # : 513 00:02:51.696 20:56:07 -- setup/hugepages.sh@84 -- # : 1 00:02:51.696 20:56:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.696 20:56:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:51.696 20:56:07 -- setup/hugepages.sh@83 -- # : 0 00:02:51.696 20:56:07 -- setup/hugepages.sh@84 -- # : 0 00:02:51.696 20:56:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.696 20:56:07 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:51.696 20:56:07 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:51.696 20:56:07 -- setup/hugepages.sh@160 -- # setup output 00:02:51.696 20:56:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.696 20:56:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:54.991 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:54.991 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:54.991 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:54.991 20:56:10 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:54.991 20:56:10 -- setup/hugepages.sh@89 -- # local node 00:02:54.991 20:56:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:54.991 20:56:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:54.991 20:56:10 -- setup/hugepages.sh@92 -- # local surp 00:02:54.991 20:56:10 -- setup/hugepages.sh@93 -- # local resv 00:02:54.991 20:56:10 -- setup/hugepages.sh@94 -- # local anon 00:02:54.991 20:56:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:54.991 20:56:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:54.991 20:56:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:54.991 20:56:10 -- setup/common.sh@18 -- # local node= 00:02:54.991 20:56:10 -- setup/common.sh@19 -- # local var val 00:02:54.991 20:56:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.991 20:56:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.991 20:56:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.991 20:56:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.991 20:56:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.991 20:56:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.991 20:56:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.991 20:56:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171895884 kB' 'MemAvailable: 175231740 kB' 'Buffers: 3888 kB' 'Cached: 13479008 kB' 'SwapCached: 0 kB' 'Active: 10336992 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765624 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521928 kB' 'Mapped: 247756 kB' 'Shmem: 9246584 kB' 'KReclaimable: 487992 kB' 'Slab: 1127076 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639084 kB' 'KernelStack: 20368 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 11202180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318028 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.991 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.991 20:56:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 
20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.992 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.992 20:56:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.992 20:56:10 -- setup/common.sh@33 -- # echo 0 00:02:54.992 20:56:10 -- setup/common.sh@33 -- # return 0 00:02:54.992 20:56:10 -- setup/hugepages.sh@97 -- # anon=0 00:02:54.992 20:56:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.992 20:56:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.992 20:56:10 -- setup/common.sh@18 -- # local node= 00:02:54.992 20:56:10 -- setup/common.sh@19 -- # local var val 00:02:54.992 20:56:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.992 20:56:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.992 20:56:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.992 20:56:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.992 20:56:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.992 20:56:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.993 20:56:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171896880 kB' 'MemAvailable: 175232736 kB' 'Buffers: 3888 kB' 'Cached: 13479012 kB' 'SwapCached: 0 kB' 'Active: 10336060 kB' 'Inactive: 3664944 kB' 'Active(anon): 9764692 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521480 kB' 'Mapped: 247664 kB' 'Shmem: 9246588 kB' 'KReclaimable: 487992 kB' 'Slab: 1126988 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 
638996 kB' 'KernelStack: 20368 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 11202192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 
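For the odd_alloc pass the trace above requests 1025 hugepages (HUGEMEM=2049) and distributes them unevenly across the two NUMA nodes, 513 on node0 and 512 on node1, which is what the node assignments and the HugePages_Total / Hugetlb figures in the meminfo dumps reflect. A quick cross-check of that arithmetic (illustrative only, not part of the test script):

echo $(( 513 + 512 ))     # 1025    -> matches HugePages_Total: 1025 above
echo $(( 1025 * 2048 ))   # 2099200 -> matches Hugetlb: 2099200 kB above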
00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- 
setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 
20:56:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.993 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.993 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 
20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.994 20:56:10 -- setup/common.sh@33 -- # echo 0 00:02:54.994 20:56:10 -- setup/common.sh@33 -- # return 0 00:02:54.994 20:56:10 -- setup/hugepages.sh@99 -- # surp=0 00:02:54.994 20:56:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:54.994 20:56:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:54.994 20:56:10 -- setup/common.sh@18 -- # local node= 00:02:54.994 20:56:10 -- setup/common.sh@19 -- # local var val 00:02:54.994 20:56:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.994 20:56:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.994 20:56:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.994 20:56:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.994 20:56:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.994 20:56:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171896880 kB' 'MemAvailable: 175232736 kB' 'Buffers: 3888 kB' 'Cached: 13479024 kB' 'SwapCached: 0 kB' 'Active: 10335792 kB' 'Inactive: 3664944 kB' 'Active(anon): 9764424 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521164 kB' 'Mapped: 247664 kB' 'Shmem: 9246600 kB' 'KReclaimable: 487992 kB' 'Slab: 1126988 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 638996 kB' 'KernelStack: 20352 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 11202208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- 
setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.994 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.994 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- 
setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.995 20:56:10 -- setup/common.sh@33 -- # echo 0 00:02:54.995 20:56:10 -- setup/common.sh@33 -- # return 0 00:02:54.995 20:56:10 -- setup/hugepages.sh@100 -- # resv=0 
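[editor's note] At this point the trace has resolved anon=0, surp=0 and resv=0: each get_meminfo call scans either /proc/meminfo or a per-node meminfo file key by key (the long runs of "continue" above) until it hits the requested field, then echoes its value. A minimal sketch of that parsing pattern, as reconstructed from the trace, is shown below; the function name and variable names are illustrative only and are not the exact SPDK helpers.

    # Sketch (assumption): simplified re-implementation of the get_meminfo pattern
    # visible in the setup/common.sh trace above.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo line var val _
        # Per-node lookups read the node-specific meminfo file instead of /proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node [0-9]* }              # drop the "Node N " prefix on per-node files
            IFS=': ' read -r var val _ <<<"$line"  # e.g. var=HugePages_Surp val=0
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"                   # found the requested key, print its value
                return 0
            fi
        done < "$mem_f"
        echo 0                                     # key not present: report 0, as the trace does
    }
    # Example for this run: get_meminfo_sketch HugePages_Surp   -> 0 (system-wide)
    #                       get_meminfo_sketch HugePages_Total 0 -> 512 (node0, per the later trace)

The real script then checks that the global count matches the per-node split (1025 == 512 + 513 plus surplus/reserved), which is what the nr_hugepages trace immediately below verifies.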
00:02:54.995 20:56:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:54.995 nr_hugepages=1025 00:02:54.995 20:56:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:54.995 resv_hugepages=0 00:02:54.995 20:56:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:54.995 surplus_hugepages=0 00:02:54.995 20:56:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:54.995 anon_hugepages=0 00:02:54.995 20:56:10 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:54.995 20:56:10 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:54.995 20:56:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:54.995 20:56:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:54.995 20:56:10 -- setup/common.sh@18 -- # local node= 00:02:54.995 20:56:10 -- setup/common.sh@19 -- # local var val 00:02:54.995 20:56:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.995 20:56:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.995 20:56:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.995 20:56:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.995 20:56:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.995 20:56:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171896628 kB' 'MemAvailable: 175232484 kB' 'Buffers: 3888 kB' 'Cached: 13479024 kB' 'SwapCached: 0 kB' 'Active: 10336176 kB' 'Inactive: 3664944 kB' 'Active(anon): 9764808 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521012 kB' 'Mapped: 247664 kB' 'Shmem: 9246600 kB' 'KReclaimable: 487992 kB' 'Slab: 1126988 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 638996 kB' 'KernelStack: 20352 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 11202220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:54.995 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.995 20:56:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.996 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.996 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.997 20:56:10 -- setup/common.sh@33 -- # echo 1025 00:02:54.997 20:56:10 -- setup/common.sh@33 -- # return 0 00:02:54.997 20:56:10 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:54.997 20:56:10 -- setup/hugepages.sh@112 -- # get_nodes 00:02:54.997 20:56:10 -- setup/hugepages.sh@27 -- # local node 00:02:54.997 20:56:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.997 20:56:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:54.997 20:56:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.997 20:56:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:54.997 20:56:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.997 20:56:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.997 20:56:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.997 20:56:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.997 20:56:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:54.997 20:56:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.997 20:56:10 -- setup/common.sh@18 -- # local node=0 00:02:54.997 20:56:10 -- setup/common.sh@19 -- # 
local var val 00:02:54.997 20:56:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.997 20:56:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.997 20:56:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:54.997 20:56:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:54.997 20:56:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.997 20:56:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.997 20:56:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85607272 kB' 'MemUsed: 12055412 kB' 'SwapCached: 0 kB' 'Active: 6470852 kB' 'Inactive: 3326740 kB' 'Active(anon): 6277180 kB' 'Inactive(anon): 0 kB' 'Active(file): 193672 kB' 'Inactive(file): 3326740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9500388 kB' 'Mapped: 155364 kB' 'AnonPages: 300540 kB' 'Shmem: 5979976 kB' 'KernelStack: 11400 kB' 'PageTables: 6188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 203872 kB' 'Slab: 497816 kB' 'SReclaimable: 203872 kB' 'SUnreclaim: 293944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 
20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.997 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.997 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.997 20:56:10 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@33 -- # echo 0 00:02:54.998 20:56:10 -- setup/common.sh@33 -- # return 0 00:02:54.998 20:56:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.998 20:56:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.998 20:56:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.998 20:56:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:54.998 20:56:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.998 20:56:10 -- setup/common.sh@18 -- # local node=1 00:02:54.998 20:56:10 -- setup/common.sh@19 -- # local var val 00:02:54.998 20:56:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.998 20:56:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.998 20:56:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:54.998 20:56:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:54.998 20:56:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.998 20:56:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 86289288 kB' 'MemUsed: 7429192 kB' 'SwapCached: 0 kB' 'Active: 3864716 kB' 'Inactive: 338204 kB' 'Active(anon): 3487020 kB' 'Inactive(anon): 0 kB' 'Active(file): 377696 kB' 'Inactive(file): 338204 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3982552 kB' 'Mapped: 92300 kB' 'AnonPages: 220380 kB' 'Shmem: 3266652 kB' 'KernelStack: 8920 kB' 'PageTables: 3004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 284120 kB' 'Slab: 629172 kB' 'SReclaimable: 284120 kB' 'SUnreclaim: 345052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 
20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 
20:56:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.998 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.998 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # continue 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.999 20:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.999 20:56:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.999 20:56:10 -- setup/common.sh@33 -- # echo 0 00:02:54.999 20:56:10 -- setup/common.sh@33 -- # return 0 00:02:54.999 20:56:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.999 20:56:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.999 20:56:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.999 20:56:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:54.999 node0=512 expecting 513 00:02:54.999 20:56:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.999 20:56:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.999 20:56:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 
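Context for the trace above: the field-by-field scan is setup/common.sh's get_meminfo walking a meminfo file until it reaches the key it was asked for, here HugePages_Surp for each NUMA node in the odd_alloc check. A condensed sketch of that lookup logic follows; it assumes bash with process substitution, and get_meminfo_sketch is a made-up name for illustration, not the verbatim helper.

get_meminfo_sketch() {
    # $1 = field to look up (e.g. HugePages_Surp), $2 = optional NUMA node number
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <N> "; strip that, then
    # split each "Key:   value [kB]" line and print the value on a match.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

For example, get_meminfo_sketch HugePages_Surp 1 would print 0 on this box (the node1 meminfo dump above reports 'HugePages_Surp: 0'), which is why the scan falls through to "echo 0" and the test adds nothing to nodes_test[1].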
00:02:54.999 20:56:10 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:54.999 node1=513 expecting 512 00:02:54.999 20:56:10 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:54.999 00:02:54.999 real 0m3.232s 00:02:54.999 user 0m1.283s 00:02:54.999 sys 0m2.024s 00:02:54.999 20:56:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:54.999 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:02:54.999 ************************************ 00:02:54.999 END TEST odd_alloc 00:02:54.999 ************************************ 00:02:54.999 20:56:10 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:54.999 20:56:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:54.999 20:56:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:54.999 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:02:54.999 ************************************ 00:02:54.999 START TEST custom_alloc 00:02:54.999 ************************************ 00:02:54.999 20:56:10 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:54.999 20:56:10 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:54.999 20:56:10 -- setup/hugepages.sh@169 -- # local node 00:02:54.999 20:56:10 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:54.999 20:56:10 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:54.999 20:56:10 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:54.999 20:56:10 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:54.999 20:56:10 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:54.999 20:56:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:54.999 20:56:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:54.999 20:56:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:54.999 20:56:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.999 20:56:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:54.999 20:56:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.999 20:56:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.999 20:56:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.999 20:56:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:54.999 20:56:10 -- setup/hugepages.sh@83 -- # : 256 00:02:54.999 20:56:10 -- setup/hugepages.sh@84 -- # : 1 00:02:54.999 20:56:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:54.999 20:56:10 -- setup/hugepages.sh@83 -- # : 0 00:02:54.999 20:56:10 -- setup/hugepages.sh@84 -- # : 0 00:02:54.999 20:56:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:54.999 20:56:10 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:54.999 20:56:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:54.999 20:56:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:02:54.999 20:56:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:54.999 20:56:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:54.999 20:56:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.999 20:56:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:54.999 20:56:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.999 20:56:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.999 20:56:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.999 20:56:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:54.999 20:56:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:54.999 20:56:10 -- setup/hugepages.sh@78 -- # return 0 00:02:54.999 20:56:10 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:54.999 20:56:10 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:54.999 20:56:10 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:54.999 20:56:10 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:54.999 20:56:10 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:54.999 20:56:10 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:54.999 20:56:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:54.999 20:56:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.999 20:56:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:54.999 20:56:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.999 20:56:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.999 20:56:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.999 20:56:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:54.999 20:56:10 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:55.000 20:56:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:55.000 20:56:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:55.000 20:56:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:55.000 20:56:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:55.000 20:56:10 -- setup/hugepages.sh@78 -- # return 0 00:02:55.000 20:56:10 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:55.000 20:56:10 -- setup/hugepages.sh@187 -- # setup output 00:02:55.000 20:56:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.000 20:56:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:58.293 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.293 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:58.293 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.293 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.293 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.293 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.293 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.293 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.293 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.294 0000:80:04.7 
(8086 2021): Already using the vfio-pci driver 00:02:58.294 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.294 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.294 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.294 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.294 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.294 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.294 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.294 20:56:13 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:58.294 20:56:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:58.294 20:56:13 -- setup/hugepages.sh@89 -- # local node 00:02:58.294 20:56:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:58.294 20:56:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:58.294 20:56:13 -- setup/hugepages.sh@92 -- # local surp 00:02:58.294 20:56:13 -- setup/hugepages.sh@93 -- # local resv 00:02:58.294 20:56:13 -- setup/hugepages.sh@94 -- # local anon 00:02:58.294 20:56:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:58.294 20:56:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:58.294 20:56:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:58.294 20:56:13 -- setup/common.sh@18 -- # local node= 00:02:58.294 20:56:13 -- setup/common.sh@19 -- # local var val 00:02:58.294 20:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.294 20:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.294 20:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.294 20:56:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.294 20:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.294 20:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 170843560 kB' 'MemAvailable: 174179416 kB' 'Buffers: 3888 kB' 'Cached: 13479136 kB' 'SwapCached: 0 kB' 'Active: 10336084 kB' 'Inactive: 3664944 kB' 'Active(anon): 9764716 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521216 kB' 'Mapped: 247724 kB' 'Shmem: 9246712 kB' 'KReclaimable: 487992 kB' 'Slab: 1127660 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639668 kB' 'KernelStack: 20384 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 11203112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318188 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 
20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.294 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.294 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.295 20:56:13 -- setup/common.sh@33 -- # echo 0 00:02:58.295 20:56:13 -- setup/common.sh@33 -- # return 0 00:02:58.295 20:56:13 -- setup/hugepages.sh@97 -- # anon=0 00:02:58.295 20:56:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:58.295 20:56:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.295 20:56:13 -- setup/common.sh@18 -- # local node= 00:02:58.295 20:56:13 -- setup/common.sh@19 -- # local var val 00:02:58.295 20:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.295 20:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.295 20:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.295 20:56:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.295 20:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.295 20:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 170846080 kB' 'MemAvailable: 174181936 kB' 'Buffers: 3888 kB' 'Cached: 13479140 kB' 'SwapCached: 0 kB' 'Active: 10336396 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765028 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521576 kB' 'Mapped: 247724 kB' 'Shmem: 9246716 kB' 'KReclaimable: 487992 kB' 'Slab: 1127660 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639668 kB' 'KernelStack: 20368 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 11203124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318140 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # 
continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.295 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.295 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 
20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.296 20:56:13 -- setup/common.sh@33 -- # echo 0 00:02:58.296 20:56:13 -- setup/common.sh@33 -- # return 0 00:02:58.296 20:56:13 -- setup/hugepages.sh@99 -- # surp=0 00:02:58.296 20:56:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:58.296 20:56:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:58.296 20:56:13 -- setup/common.sh@18 -- # local node= 00:02:58.296 20:56:13 -- setup/common.sh@19 -- # local var val 00:02:58.296 20:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.296 20:56:13 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:02:58.296 20:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.296 20:56:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.296 20:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.296 20:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 170846292 kB' 'MemAvailable: 174182148 kB' 'Buffers: 3888 kB' 'Cached: 13479148 kB' 'SwapCached: 0 kB' 'Active: 10336732 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765364 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521972 kB' 'Mapped: 247692 kB' 'Shmem: 9246724 kB' 'KReclaimable: 487992 kB' 'Slab: 1127744 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639752 kB' 'KernelStack: 20368 kB' 'PageTables: 9376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 11225688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318140 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 
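Context for this stretch of the trace: custom_alloc has asked scripts/setup.sh, via HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', for 512 pages on node 0 and 1024 on node 1, i.e. 1536 pages of 2048 kB, and verify_nr_hugepages is now re-reading /proc/meminfo (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total) to confirm the kernel reports the same picture. The gist of that check, as a hedged sketch reusing the get_meminfo_sketch helper from the earlier note rather than the verbatim hugepages.sh:

nr_hugepages=1536                             # 512 (node0) + 1024 (node1)
anon=$(get_meminfo_sketch AnonHugePages)      # 0 kB in this run
surp=$(get_meminfo_sketch HugePages_Surp)     # 0: nothing allocated beyond the target
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0: nothing reserved but not yet faulted in
total=$(get_meminfo_sketch HugePages_Total)   # 1536, i.e. Hugetlb = 1536 * 2048 kB = 3145728 kB
(( total == nr_hugepages && surp == 0 && resv == 0 )) &&
    echo "nr_hugepages=$nr_hugepages verified"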
00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.297 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.297 20:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.298 20:56:13 -- setup/common.sh@33 -- # echo 0 00:02:58.298 20:56:13 -- setup/common.sh@33 -- # return 0 00:02:58.298 20:56:13 -- setup/hugepages.sh@100 -- # resv=0 00:02:58.298 20:56:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:58.298 nr_hugepages=1536 00:02:58.298 20:56:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:58.298 resv_hugepages=0 00:02:58.298 20:56:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:58.298 surplus_hugepages=0 00:02:58.298 20:56:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:58.298 anon_hugepages=0 00:02:58.298 20:56:13 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:58.298 20:56:13 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:58.298 20:56:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:58.298 20:56:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:58.298 20:56:13 -- setup/common.sh@18 -- # local node= 00:02:58.298 20:56:13 -- setup/common.sh@19 -- # local var val 00:02:58.298 20:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.298 20:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.298 20:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.298 20:56:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.298 20:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.298 20:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 170846116 kB' 'MemAvailable: 174181972 kB' 'Buffers: 3888 kB' 'Cached: 13479152 kB' 
'SwapCached: 0 kB' 'Active: 10337152 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765784 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522384 kB' 'Mapped: 247712 kB' 'Shmem: 9246728 kB' 'KReclaimable: 487992 kB' 'Slab: 1127744 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639752 kB' 'KernelStack: 20320 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 11204092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318124 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- 
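[editor's note] The block above is setup/hugepages.sh checking its bookkeeping before continuing: the trace just echoed resv=0, surplus_hugepages=0 and anon_hugepages=0, and it now re-reads HugePages_Total (1536) from /proc/meminfo to confirm it matches nr_hugepages plus surplus and reserved pages. A minimal standalone sketch of that consistency check, assuming the standard /proc/meminfo layout and using awk in place of the script's own get_meminfo helper (variable names mirror the trace; this is a reconstruction, not the SPDK source):

    # Sketch of the accounting check traced above (assumed equivalent logic).
    nr_hugepages=1536
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 in this run
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1536 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    fi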
# continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.298 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.298 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.299 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.299 20:56:13 -- setup/common.sh@33 -- # echo 1536 00:02:58.299 20:56:13 -- setup/common.sh@33 -- # return 0 00:02:58.299 20:56:13 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:58.299 20:56:13 -- setup/hugepages.sh@112 -- # get_nodes 00:02:58.299 20:56:13 -- setup/hugepages.sh@27 -- # local node 00:02:58.299 20:56:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.299 20:56:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:58.299 20:56:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.299 20:56:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:58.299 20:56:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:58.299 20:56:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:58.299 20:56:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.299 20:56:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.299 20:56:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:58.299 20:56:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.299 20:56:13 -- setup/common.sh@18 -- # local node=0 00:02:58.299 20:56:13 -- setup/common.sh@19 -- # local var val 00:02:58.299 20:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.299 20:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.299 20:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:58.299 20:56:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:58.299 20:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.299 20:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.299 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85601496 kB' 'MemUsed: 12061188 kB' 'SwapCached: 0 kB' 'Active: 6471620 kB' 'Inactive: 3326740 kB' 'Active(anon): 6277948 kB' 'Inactive(anon): 0 kB' 'Active(file): 193672 kB' 'Inactive(file): 3326740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9500480 kB' 'Mapped: 155392 kB' 'AnonPages: 301104 kB' 'Shmem: 5980068 kB' 'KernelStack: 11544 kB' 'PageTables: 6380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 203872 kB' 'Slab: 498544 kB' 'SReclaimable: 203872 kB' 'SUnreclaim: 294672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # 
continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.300 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.300 20:56:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.300 20:56:13 -- setup/common.sh@33 -- # echo 0 00:02:58.300 20:56:13 -- setup/common.sh@33 -- # return 0 00:02:58.300 20:56:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.300 20:56:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.300 20:56:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.300 20:56:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:58.300 20:56:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.300 20:56:13 -- setup/common.sh@18 -- # 
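[editor's note] The lookup that just completed (node 0, HugePages_Surp returning 0 via "echo 0") is setup/common.sh's get_meminfo walking a per-node meminfo file key by key, which is why every /proc/meminfo field name appears in the trace before the match. A minimal reconstruction of that helper, based only on the commands visible above (the function name get_meminfo_field is an editor's placeholder; the real helper is get_meminfo):

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the "Node +([0-9]) " prefix strip below
    # Sketch: return one field from /proc/meminfo, or from a node's meminfo if given.
    get_meminfo_field() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        echo 0
    }
    node0_surp=$(get_meminfo_field HugePages_Surp 0)   # 0 in the run traced above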
local node=1 00:02:58.300 20:56:13 -- setup/common.sh@19 -- # local var val 00:02:58.300 20:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.300 20:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.300 20:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:58.301 20:56:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:58.301 20:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.301 20:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.301 20:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 85247592 kB' 'MemUsed: 8470888 kB' 'SwapCached: 0 kB' 'Active: 3865240 kB' 'Inactive: 338204 kB' 'Active(anon): 3487544 kB' 'Inactive(anon): 0 kB' 'Active(file): 377696 kB' 'Inactive(file): 338204 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3982588 kB' 'Mapped: 92320 kB' 'AnonPages: 220888 kB' 'Shmem: 3266688 kB' 'KernelStack: 8904 kB' 'PageTables: 3004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 284120 kB' 'Slab: 629168 kB' 'SReclaimable: 284120 kB' 'SUnreclaim: 345048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:13 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- 
setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # continue 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.301 20:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.301 20:56:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.301 20:56:14 -- setup/common.sh@33 -- # echo 0 00:02:58.301 20:56:14 -- setup/common.sh@33 -- # return 0 00:02:58.302 20:56:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.302 20:56:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.302 20:56:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.302 20:56:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.302 20:56:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:58.302 node0=512 expecting 512 00:02:58.302 20:56:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.302 20:56:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.302 20:56:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.302 20:56:14 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:58.302 node1=1024 expecting 1024 00:02:58.302 20:56:14 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:58.302 00:02:58.302 real 0m3.278s 00:02:58.302 user 0m1.345s 00:02:58.302 sys 0m2.008s 00:02:58.302 20:56:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:58.302 20:56:14 -- common/autotest_common.sh@10 -- # set +x 00:02:58.302 ************************************ 00:02:58.302 END TEST custom_alloc 00:02:58.302 ************************************ 00:02:58.302 20:56:14 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:58.302 20:56:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:58.302 20:56:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:58.302 20:56:14 -- common/autotest_common.sh@10 -- # set +x 00:02:58.302 ************************************ 00:02:58.302 START TEST no_shrink_alloc 00:02:58.302 ************************************ 00:02:58.302 20:56:14 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:02:58.302 20:56:14 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:58.302 20:56:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:58.302 20:56:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:58.302 20:56:14 -- setup/hugepages.sh@51 -- # shift 00:02:58.302 20:56:14 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:58.302 20:56:14 -- setup/hugepages.sh@52 -- # local node_ids 00:02:58.302 20:56:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:58.302 20:56:14 
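[editor's note] custom_alloc finished with both expectations met (node0=512, node1=1024), and the no_shrink_alloc test starting above calls get_test_nr_hugepages with 2097152 kB pinned to node 0. With the 2048 kB hugepage size reported earlier in the trace, that works out to 1024 pages, which is the nr_hugepages=1024 the next lines set. A small sketch of that arithmetic, assuming the standard Hugepagesize field (the real logic lives in get_test_nr_hugepages in setup/hugepages.sh):

    # Sketch: convert the requested test size in kB into a hugepage count for one node.
    size_kb=2097152
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this system
    nr_hugepages=$(( size_kb / hugepage_kb ))                        # 2097152 / 2048 = 1024
    nodes_test=()
    nodes_test[0]=$nr_hugepages      # the test pins all pages to node 0
    echo "requesting ${nodes_test[0]} hugepages on node 0"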
-- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:58.302 20:56:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:58.302 20:56:14 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:58.302 20:56:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:58.302 20:56:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:58.302 20:56:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:58.302 20:56:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:58.302 20:56:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:58.302 20:56:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:58.302 20:56:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:58.302 20:56:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:58.302 20:56:14 -- setup/hugepages.sh@73 -- # return 0 00:02:58.302 20:56:14 -- setup/hugepages.sh@198 -- # setup output 00:02:58.302 20:56:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.302 20:56:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:01.595 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:01.595 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:01.595 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:01.595 20:56:17 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:01.595 20:56:17 -- setup/hugepages.sh@89 -- # local node 00:03:01.595 20:56:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:01.595 20:56:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:01.595 20:56:17 -- setup/hugepages.sh@92 -- # local surp 00:03:01.595 20:56:17 -- setup/hugepages.sh@93 -- # local resv 00:03:01.595 20:56:17 -- setup/hugepages.sh@94 -- # local anon 00:03:01.595 20:56:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:01.595 20:56:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:01.595 20:56:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:01.595 20:56:17 -- setup/common.sh@18 -- # local node= 00:03:01.595 20:56:17 -- setup/common.sh@19 -- # local var val 00:03:01.595 20:56:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.595 20:56:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.595 20:56:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.595 20:56:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.595 20:56:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.595 20:56:17 
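[editor's note] verify_nr_hugepages, which begins above after setup.sh reports the NVMe and DMA devices already bound to vfio-pci, first checks the transparent hugepage mode; because this system reports "always [madvise] never" (madvise, not never), it also reads AnonHugePages so THP-backed anonymous memory is not mistaken for a discrepancy, and it comes back 0 here. A sketch of that guard, assuming the standard sysfs/procfs paths (variable names follow the trace; this is a reconstruction, not the SPDK source):

    # Sketch of the THP guard at the top of verify_nr_hugepages (assumed equivalent logic).
    anon=0
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP is available, so anonymous hugepages may exist; record them.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in this run
    fi
    echo "anon_hugepages=$anon"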
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171808600 kB' 'MemAvailable: 175144456 kB' 'Buffers: 3888 kB' 'Cached: 13479256 kB' 'SwapCached: 0 kB' 'Active: 10337344 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765976 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522364 kB' 'Mapped: 247728 kB' 'Shmem: 9246832 kB' 'KReclaimable: 487992 kB' 'Slab: 1127808 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639816 kB' 'KernelStack: 20368 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11203312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318220 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.595 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.595 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.596 20:56:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.596 20:56:17 -- setup/common.sh@33 -- # echo 0 00:03:01.596 20:56:17 -- setup/common.sh@33 -- # return 0 00:03:01.596 20:56:17 -- setup/hugepages.sh@97 -- # anon=0 00:03:01.596 20:56:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.596 20:56:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.596 20:56:17 -- setup/common.sh@18 -- # local node= 00:03:01.596 20:56:17 -- setup/common.sh@19 -- # local var val 00:03:01.596 20:56:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.596 20:56:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.596 20:56:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.596 20:56:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.596 20:56:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.596 20:56:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.596 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171812452 kB' 'MemAvailable: 175148308 kB' 'Buffers: 3888 kB' 'Cached: 13479256 kB' 'SwapCached: 0 kB' 'Active: 10337700 kB' 'Inactive: 3664944 kB' 'Active(anon): 9766332 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522800 kB' 'Mapped: 247728 kB' 'Shmem: 9246832 kB' 'KReclaimable: 487992 kB' 'Slab: 1127800 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639808 kB' 'KernelStack: 20320 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11203324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318188 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- 
setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.597 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.597 20:56:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.598 20:56:17 -- setup/common.sh@33 -- # echo 0 00:03:01.598 20:56:17 -- setup/common.sh@33 -- # return 0 00:03:01.598 20:56:17 -- setup/hugepages.sh@99 -- # surp=0 00:03:01.598 20:56:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.598 20:56:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.598 20:56:17 -- setup/common.sh@18 -- # local node= 00:03:01.598 20:56:17 -- setup/common.sh@19 -- # local var val 00:03:01.598 20:56:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.598 20:56:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.598 20:56:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.598 20:56:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.598 20:56:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.598 20:56:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171811648 kB' 'MemAvailable: 175147504 kB' 'Buffers: 3888 kB' 'Cached: 13479272 kB' 'SwapCached: 0 kB' 'Active: 10336924 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765556 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522004 kB' 'Mapped: 247724 kB' 'Shmem: 9246848 kB' 'KReclaimable: 487992 kB' 'Slab: 1127816 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639824 kB' 'KernelStack: 20336 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11203340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318188 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:03:01.598 20:56:17 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.598 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.598 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- 
setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 
20:56:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.599 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.599 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.600 20:56:17 -- setup/common.sh@33 -- # echo 0 00:03:01.600 
20:56:17 -- setup/common.sh@33 -- # return 0 00:03:01.600 20:56:17 -- setup/hugepages.sh@100 -- # resv=0 00:03:01.600 20:56:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:01.600 nr_hugepages=1024 00:03:01.600 20:56:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.600 resv_hugepages=0 00:03:01.600 20:56:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.600 surplus_hugepages=0 00:03:01.600 20:56:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.600 anon_hugepages=0 00:03:01.600 20:56:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.600 20:56:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:01.600 20:56:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:01.600 20:56:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:01.600 20:56:17 -- setup/common.sh@18 -- # local node= 00:03:01.600 20:56:17 -- setup/common.sh@19 -- # local var val 00:03:01.600 20:56:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.600 20:56:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.600 20:56:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.600 20:56:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.600 20:56:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.600 20:56:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.600 20:56:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171811648 kB' 'MemAvailable: 175147504 kB' 'Buffers: 3888 kB' 'Cached: 13479296 kB' 'SwapCached: 0 kB' 'Active: 10336608 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765240 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521660 kB' 'Mapped: 247724 kB' 'Shmem: 9246872 kB' 'KReclaimable: 487992 kB' 'Slab: 1127816 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639824 kB' 'KernelStack: 20352 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11203352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318188 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.600 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.600 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 
00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 
20:56:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.601 20:56:17 -- setup/common.sh@33 -- # echo 1024 00:03:01.601 20:56:17 -- setup/common.sh@33 -- # return 0 00:03:01.601 20:56:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.601 20:56:17 -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.601 20:56:17 -- setup/hugepages.sh@27 -- # local node 00:03:01.601 20:56:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.601 20:56:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:01.601 20:56:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.601 20:56:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.601 20:56:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.601 20:56:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.601 20:56:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.601 20:56:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.601 20:56:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.601 20:56:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.601 20:56:17 
-- setup/common.sh@18 -- # local node=0 00:03:01.601 20:56:17 -- setup/common.sh@19 -- # local var val 00:03:01.601 20:56:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.601 20:56:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.601 20:56:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.601 20:56:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.601 20:56:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.601 20:56:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84534108 kB' 'MemUsed: 13128576 kB' 'SwapCached: 0 kB' 'Active: 6470944 kB' 'Inactive: 3326740 kB' 'Active(anon): 6277272 kB' 'Inactive(anon): 0 kB' 'Active(file): 193672 kB' 'Inactive(file): 3326740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9500568 kB' 'Mapped: 155424 kB' 'AnonPages: 300288 kB' 'Shmem: 5980156 kB' 'KernelStack: 11384 kB' 'PageTables: 6084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 203872 kB' 'Slab: 498444 kB' 'SReclaimable: 203872 kB' 'SUnreclaim: 294572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.601 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.601 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # continue 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.602 20:56:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.602 20:56:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.602 20:56:17 -- setup/common.sh@33 -- # echo 0 00:03:01.602 20:56:17 -- setup/common.sh@33 -- # return 0 00:03:01.602 20:56:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.602 20:56:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.602 20:56:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.602 20:56:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.602 20:56:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:01.602 node0=1024 expecting 1024 00:03:01.602 20:56:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:01.602 20:56:17 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:01.602 20:56:17 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:01.602 20:56:17 -- setup/hugepages.sh@202 -- # setup output 00:03:01.602 20:56:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.602 20:56:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:04.891 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:04.891 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:04.891 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:04.891 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:04.892 20:56:20 -- setup/hugepages.sh@204 -- # 
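[editorial sketch, not part of the log] The loop traced above is the get_meminfo helper from setup/common.sh walking /sys/devices/system/node/node0/meminfo key by key until it reaches the requested field (here HugePages_Surp), then echoing its value. Read as a standalone script, the traced commands amount to roughly the following reconstruction; it follows the variable names visible in the trace but is an illustrative sketch, not the SPDK script itself:

#!/usr/bin/env bash
# Sketch of the traced get_meminfo helper: look a key (e.g. HugePages_Surp)
# up in /proc/meminfo, or in the per-node meminfo file when a NUMA node is
# given, and print its value.
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val

    # Per-node queries read the node-specific meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix
    # so the key names match the plain /proc/meminfo layout.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# e.g. get_meminfo HugePages_Free 0   -> free hugepages on node0
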
verify_nr_hugepages 00:03:04.892 20:56:20 -- setup/hugepages.sh@89 -- # local node 00:03:04.892 20:56:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:04.892 20:56:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:04.892 20:56:20 -- setup/hugepages.sh@92 -- # local surp 00:03:04.892 20:56:20 -- setup/hugepages.sh@93 -- # local resv 00:03:04.892 20:56:20 -- setup/hugepages.sh@94 -- # local anon 00:03:04.892 20:56:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:04.892 20:56:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:04.892 20:56:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:04.892 20:56:20 -- setup/common.sh@18 -- # local node= 00:03:04.892 20:56:20 -- setup/common.sh@19 -- # local var val 00:03:04.892 20:56:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.892 20:56:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.892 20:56:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.892 20:56:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.892 20:56:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.892 20:56:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171830996 kB' 'MemAvailable: 175166852 kB' 'Buffers: 3888 kB' 'Cached: 13479364 kB' 'SwapCached: 0 kB' 'Active: 10338432 kB' 'Inactive: 3664944 kB' 'Active(anon): 9767064 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523240 kB' 'Mapped: 247824 kB' 'Shmem: 9246940 kB' 'KReclaimable: 487992 kB' 'Slab: 1127656 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639664 kB' 'KernelStack: 20320 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11204068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318156 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.892 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.892 20:56:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.893 20:56:20 -- setup/common.sh@33 -- # echo 0 00:03:04.893 20:56:20 -- setup/common.sh@33 -- # return 0 00:03:04.893 20:56:20 -- setup/hugepages.sh@97 -- # anon=0 00:03:04.893 20:56:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:04.893 
20:56:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.893 20:56:20 -- setup/common.sh@18 -- # local node= 00:03:04.893 20:56:20 -- setup/common.sh@19 -- # local var val 00:03:04.893 20:56:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.893 20:56:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.893 20:56:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.893 20:56:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.893 20:56:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.893 20:56:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.893 20:56:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171830692 kB' 'MemAvailable: 175166548 kB' 'Buffers: 3888 kB' 'Cached: 13479368 kB' 'SwapCached: 0 kB' 'Active: 10337868 kB' 'Inactive: 3664944 kB' 'Active(anon): 9766500 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523104 kB' 'Mapped: 247728 kB' 'Shmem: 9246944 kB' 'KReclaimable: 487992 kB' 'Slab: 1127616 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639624 kB' 'KernelStack: 20352 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11204080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318108 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.893 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.893 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # 
continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.894 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.894 20:56:20 -- setup/common.sh@33 -- # echo 0 00:03:04.894 20:56:20 -- setup/common.sh@33 -- # return 0 00:03:04.894 20:56:20 -- setup/hugepages.sh@99 -- # surp=0 00:03:04.894 20:56:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.894 20:56:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:04.894 20:56:20 -- setup/common.sh@18 -- # local node= 00:03:04.894 20:56:20 -- setup/common.sh@19 -- # local var val 00:03:04.894 20:56:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.894 20:56:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.894 20:56:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.894 20:56:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.894 20:56:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.894 20:56:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.894 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171830408 kB' 'MemAvailable: 175166264 kB' 'Buffers: 3888 kB' 'Cached: 13479380 kB' 'SwapCached: 0 kB' 
'Active: 10337524 kB' 'Inactive: 3664944 kB' 'Active(anon): 9766156 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522684 kB' 'Mapped: 247728 kB' 'Shmem: 9246956 kB' 'KReclaimable: 487992 kB' 'Slab: 1127616 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639624 kB' 'KernelStack: 20336 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11204096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318108 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.895 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.895 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 
00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.896 20:56:20 -- setup/common.sh@33 -- # echo 0 00:03:04.896 20:56:20 -- setup/common.sh@33 -- # return 0 00:03:04.896 20:56:20 -- setup/hugepages.sh@100 -- # resv=0 00:03:04.896 20:56:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:04.896 nr_hugepages=1024 00:03:04.896 20:56:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.896 resv_hugepages=0 00:03:04.896 20:56:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.896 surplus_hugepages=0 00:03:04.896 20:56:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.896 anon_hugepages=0 00:03:04.896 20:56:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.896 20:56:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:04.896 20:56:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.896 20:56:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.896 20:56:20 -- setup/common.sh@18 -- # local node= 00:03:04.896 20:56:20 -- setup/common.sh@19 -- # local var val 00:03:04.896 20:56:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.896 20:56:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.896 20:56:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.896 20:56:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.896 20:56:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.896 20:56:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.896 20:56:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171830844 kB' 'MemAvailable: 175166700 kB' 'Buffers: 3888 kB' 'Cached: 13479404 kB' 'SwapCached: 0 kB' 'Active: 10337092 kB' 'Inactive: 3664944 kB' 'Active(anon): 9765724 kB' 'Inactive(anon): 0 kB' 'Active(file): 571368 kB' 'Inactive(file): 3664944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522244 kB' 'Mapped: 247728 kB' 'Shmem: 9246980 kB' 'KReclaimable: 487992 kB' 'Slab: 1127616 kB' 'SReclaimable: 487992 kB' 'SUnreclaim: 639624 kB' 'KernelStack: 20336 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 11204108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318108 kB' 'VmallocChunk: 0 kB' 'Percpu: 93312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632084 kB' 'DirectMap2M: 28553216 kB' 'DirectMap1G: 169869312 kB' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.896 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.896 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- 
setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.897 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.897 20:56:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.898 20:56:20 -- 
setup/common.sh@33 -- # echo 1024 00:03:04.898 20:56:20 -- setup/common.sh@33 -- # return 0 00:03:04.898 20:56:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.898 20:56:20 -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.898 20:56:20 -- setup/hugepages.sh@27 -- # local node 00:03:04.898 20:56:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.898 20:56:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:04.898 20:56:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.898 20:56:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.898 20:56:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.898 20:56:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.898 20:56:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.898 20:56:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.898 20:56:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.898 20:56:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.898 20:56:20 -- setup/common.sh@18 -- # local node=0 00:03:04.898 20:56:20 -- setup/common.sh@19 -- # local var val 00:03:04.898 20:56:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.898 20:56:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.898 20:56:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.898 20:56:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.898 20:56:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.898 20:56:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84527644 kB' 'MemUsed: 13135040 kB' 'SwapCached: 0 kB' 'Active: 6470588 kB' 'Inactive: 3326740 kB' 'Active(anon): 6276916 kB' 'Inactive(anon): 0 kB' 'Active(file): 193672 kB' 'Inactive(file): 3326740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9500640 kB' 'Mapped: 155428 kB' 'AnonPages: 300084 kB' 'Shmem: 5980228 kB' 'KernelStack: 11368 kB' 'PageTables: 6040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 203872 kB' 'Slab: 498064 kB' 'SReclaimable: 203872 kB' 'SUnreclaim: 294192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 
20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.898 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.898 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # continue 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.899 20:56:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.899 20:56:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.899 20:56:20 -- setup/common.sh@33 -- # echo 0 00:03:04.899 20:56:20 -- setup/common.sh@33 -- # return 0 00:03:04.899 20:56:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.899 20:56:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.899 20:56:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.899 20:56:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.899 20:56:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:04.899 node0=1024 expecting 1024 00:03:04.899 20:56:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:04.899 00:03:04.899 real 0m6.264s 00:03:04.899 user 0m2.536s 00:03:04.899 sys 0m3.868s 00:03:04.899 20:56:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:04.899 20:56:20 -- common/autotest_common.sh@10 -- # set +x 00:03:04.899 ************************************ 00:03:04.899 END TEST no_shrink_alloc 00:03:04.899 ************************************ 00:03:04.899 20:56:20 -- setup/hugepages.sh@217 -- # clear_hp 00:03:04.899 20:56:20 -- setup/hugepages.sh@37 -- # local node hp 00:03:04.899 20:56:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.899 
20:56:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.899 20:56:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.899 20:56:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.899 20:56:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.899 20:56:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.899 20:56:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.899 20:56:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.899 20:56:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.899 20:56:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.899 20:56:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:04.899 20:56:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:04.899 00:03:04.899 real 0m24.434s 00:03:04.899 user 0m9.451s 00:03:04.899 sys 0m14.610s 00:03:04.899 20:56:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:04.899 20:56:20 -- common/autotest_common.sh@10 -- # set +x 00:03:04.899 ************************************ 00:03:04.899 END TEST hugepages 00:03:04.899 ************************************ 00:03:04.899 20:56:20 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:04.899 20:56:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.899 20:56:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.899 20:56:20 -- common/autotest_common.sh@10 -- # set +x 00:03:04.899 ************************************ 00:03:04.899 START TEST driver 00:03:04.899 ************************************ 00:03:04.899 20:56:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:04.899 * Looking for test storage... 
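Before the driver suite's storage discovery continues below, the hugepages run that finishes above spends most of its trace in two helpers: get_meminfo, which walks /proc/meminfo (or a node's meminfo file) field by field with "IFS=': ' read -r var val _" until it hits the requested key, and clear_hp, which resets every NUMA node's hugepage pools on the way out. A condensed sketch of both follows; the real get_meminfo uses mapfile plus extglob prefix stripping as seen in the trace (the sed here is a simplification), and the nr_hugepages redirect target in clear_hp is an assumption, since xtrace only shows the bare 'echo 0'.

  get_meminfo() {    # usage: get_meminfo <Field> [numa-node]
      local want=$1 node=${2:-}
      local file=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node meminfo prefixes each line with "Node <n> "; strip it, then
      # split "Field: value kB" with IFS=': ' just like the traced loop.
      sed 's/^Node [0-9]* //' "$file" | while IFS=': ' read -r var val _; do
          if [[ $var == "$want" ]]; then
              echo "$val"
              break
          fi
      done
  }

  clear_hp() {    # zero every per-node hugepage pool, as CLEAR_HUGE=yes implies
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"    # assumed target of the traced 'echo 0'
          done
      done
  }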
00:03:04.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.899 20:56:20 -- setup/driver.sh@68 -- # setup reset 00:03:04.899 20:56:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.899 20:56:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.093 20:56:24 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:09.093 20:56:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:09.093 20:56:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:09.093 20:56:24 -- common/autotest_common.sh@10 -- # set +x 00:03:09.093 ************************************ 00:03:09.093 START TEST guess_driver 00:03:09.093 ************************************ 00:03:09.093 20:56:24 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:09.093 20:56:24 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:09.093 20:56:24 -- setup/driver.sh@47 -- # local fail=0 00:03:09.093 20:56:24 -- setup/driver.sh@49 -- # pick_driver 00:03:09.093 20:56:24 -- setup/driver.sh@36 -- # vfio 00:03:09.093 20:56:24 -- setup/driver.sh@21 -- # local iommu_grups 00:03:09.093 20:56:24 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:09.093 20:56:24 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:09.093 20:56:24 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:09.093 20:56:24 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:09.093 20:56:24 -- setup/driver.sh@29 -- # (( 222 > 0 )) 00:03:09.093 20:56:24 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:09.093 20:56:24 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:09.093 20:56:24 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:09.093 20:56:24 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:09.093 20:56:24 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:09.093 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:09.093 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:09.093 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:09.093 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:09.093 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:09.093 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:09.093 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:09.093 20:56:24 -- setup/driver.sh@30 -- # return 0 00:03:09.093 20:56:24 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:09.093 20:56:24 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:09.093 20:56:24 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:09.093 20:56:24 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:09.093 Looking for driver=vfio-pci 00:03:09.093 20:56:24 -- setup/driver.sh@45 -- # setup output config 00:03:09.093 20:56:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.093 20:56:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.093 20:56:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.382 20:56:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:27 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:12.382 20:56:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.382 20:56:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.382 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.382 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.986 20:56:28 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:12.986 20:56:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.986 20:56:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.254 20:56:28 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:13.254 20:56:28 -- setup/driver.sh@65 -- # setup reset 00:03:13.254 20:56:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.254 20:56:28 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.463 00:03:17.463 real 0m8.179s 00:03:17.463 user 0m2.409s 00:03:17.463 sys 0m4.313s 00:03:17.463 20:56:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.463 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:03:17.463 ************************************ 00:03:17.463 END TEST guess_driver 00:03:17.463 ************************************ 00:03:17.463 00:03:17.463 real 0m12.518s 00:03:17.463 user 0m3.680s 00:03:17.463 sys 0m6.615s 00:03:17.463 20:56:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.463 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:03:17.463 ************************************ 00:03:17.463 END TEST driver 00:03:17.463 ************************************ 00:03:17.463 20:56:33 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:17.463 20:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.463 20:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.463 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:03:17.463 ************************************ 00:03:17.463 START TEST devices 00:03:17.463 ************************************ 00:03:17.463 20:56:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:17.463 * Looking for test storage... 
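Before the devices suite output resumes, note that the driver test which just passed boils down to one decision traced above: vfio-pci is selected because /sys/kernel/iommu_groups is populated (222 groups here) and modprobe --show-depends vfio_pci resolves to real .ko modules. A condensed sketch of that selection under the same inputs; the real pick_driver also considers other drivers and failure paths that are omitted here.

  pick_driver() {
      local unsafe_vfio=N
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      shopt -s nullglob    # an empty iommu_groups dir must yield a zero-length array
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      # vfio-pci is usable when the IOMMU is populated (222 groups in this run)
      # or unsafe no-IOMMU mode is enabled, and the module can be resolved.
      if { ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == [Yy] ]]; } &&
          modprobe --show-depends vfio_pci | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo 'No valid driver found'    # fallback string seen in the trace
      fi
  }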
00:03:17.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.721 20:56:33 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:17.721 20:56:33 -- setup/devices.sh@192 -- # setup reset 00:03:17.721 20:56:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.721 20:56:33 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.006 20:56:36 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:21.006 20:56:36 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:21.006 20:56:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:21.006 20:56:36 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:21.007 20:56:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:21.007 20:56:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:21.007 20:56:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:21.007 20:56:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.007 20:56:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:21.007 20:56:36 -- setup/devices.sh@196 -- # blocks=() 00:03:21.007 20:56:36 -- setup/devices.sh@196 -- # declare -a blocks 00:03:21.007 20:56:36 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:21.007 20:56:36 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:21.007 20:56:36 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:21.007 20:56:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:21.007 20:56:36 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:21.007 20:56:36 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:21.007 20:56:36 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:21.007 20:56:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:21.007 20:56:36 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:21.007 20:56:36 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:21.007 20:56:36 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:21.007 No valid GPT data, bailing 00:03:21.007 20:56:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:21.007 20:56:36 -- scripts/common.sh@391 -- # pt= 00:03:21.007 20:56:36 -- scripts/common.sh@392 -- # return 1 00:03:21.007 20:56:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:21.007 20:56:36 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:21.007 20:56:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:21.007 20:56:36 -- setup/common.sh@80 -- # echo 1000204886016 00:03:21.007 20:56:36 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:21.007 20:56:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:21.007 20:56:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:21.007 20:56:36 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:21.007 20:56:36 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:21.007 20:56:36 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:21.007 20:56:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.007 20:56:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.007 20:56:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.007 ************************************ 00:03:21.007 START TEST nvme_mount 00:03:21.007 ************************************ 00:03:21.007 20:56:36 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:21.007 20:56:36 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:21.007 20:56:36 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:21.007 20:56:36 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.007 20:56:36 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:21.007 20:56:36 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:21.007 20:56:36 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:21.007 20:56:36 -- setup/common.sh@40 -- # local part_no=1 00:03:21.007 20:56:36 -- setup/common.sh@41 -- # local size=1073741824 00:03:21.007 20:56:36 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:21.007 20:56:36 -- setup/common.sh@44 -- # parts=() 00:03:21.007 20:56:36 -- setup/common.sh@44 -- # local parts 00:03:21.007 20:56:36 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:21.007 20:56:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.007 20:56:36 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:21.007 20:56:36 -- setup/common.sh@46 -- # (( part++ )) 00:03:21.007 20:56:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.007 20:56:36 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:21.007 20:56:36 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:21.007 20:56:36 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:21.944 Creating new GPT entries in memory. 00:03:21.944 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:21.944 other utilities. 00:03:21.944 20:56:37 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:21.944 20:56:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:21.944 20:56:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:21.944 20:56:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:21.944 20:56:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:23.323 Creating new GPT entries in memory. 00:03:23.323 The operation has completed successfully. 
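The partition step that just reported success is two sgdisk calls serialized under flock, with sync_dev_uevents.sh waiting for the new partition's uevent. A minimal equivalent with the same geometry as the trace (a single 1 GiB partition, sectors 2048 through 2099199); udevadm settle below is only a stand-in for the traced uevent wait.

  disk=/dev/nvme0n1                                  # device under test in this run
  size=$((1 * 1024 * 1024 * 1024))                   # 1 GiB, i.e. 2097152 512-byte sectors
  start=2048
  end=$((start + size / 512 - 1))                    # 2099199, matching --new=1:2048:2099199
  sgdisk "$disk" --zap-all                           # wipe any existing GPT/MBR
  flock "$disk" sgdisk "$disk" --new=1:$start:$end   # create the single test partition
  udevadm settle                                     # stand-in for sync_dev_uevents.sh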
00:03:23.323 20:56:38 -- setup/common.sh@57 -- # (( part++ )) 00:03:23.323 20:56:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:23.323 20:56:38 -- setup/common.sh@62 -- # wait 2840497 00:03:23.323 20:56:38 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.323 20:56:38 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:23.323 20:56:38 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.323 20:56:38 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:23.323 20:56:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:23.323 20:56:38 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.323 20:56:38 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.323 20:56:38 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:23.323 20:56:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:23.323 20:56:38 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.323 20:56:38 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.323 20:56:38 -- setup/devices.sh@53 -- # local found=0 00:03:23.323 20:56:38 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:23.323 20:56:38 -- setup/devices.sh@56 -- # : 00:03:23.323 20:56:38 -- setup/devices.sh@59 -- # local pci status 00:03:23.323 20:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.323 20:56:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:23.323 20:56:38 -- setup/devices.sh@47 -- # setup output config 00:03:23.323 20:56:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.323 20:56:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:26.613 20:56:41 -- setup/devices.sh@63 -- # found=1 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 
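The PCI scan running through these lines is the verify pass: with PCI_ALLOWED pinned to 0000:5e:00.0, setup.sh config is expected to report the NVMe as skipped because its partition is mounted ("Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"). A rough sketch of that check, assuming setup.sh honours PCI_ALLOWED from the environment and the column layout implied by the traced 'read -r pci _ _ status'; the repository-relative path is shortened.

  target=0000:5e:00.0                      # the NVMe under test
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$target" ]] || continue
      # while the test file system is mounted, the status column carries the
      # "Active devices: mount@..." reason for not binding the device
      [[ $status == *"Active devices: "*"nvme0n1"* ]] && found=1
  done < <(PCI_ALLOWED=$target ./scripts/setup.sh config)
  ((found == 1)) && echo 'nvme0n1 correctly held back while mounted'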
20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.613 20:56:41 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:26.613 20:56:41 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.613 20:56:41 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:26.613 20:56:41 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.613 20:56:41 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:26.613 20:56:41 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.613 20:56:41 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.613 20:56:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.613 20:56:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:26.613 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:26.613 20:56:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.613 20:56:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:26.613 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:26.613 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:26.613 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:26.613 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:26.613 20:56:42 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:26.613 20:56:42 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:26.613 20:56:42 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.613 20:56:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:26.613 20:56:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:26.613 20:56:42 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.613 20:56:42 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.613 20:56:42 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:26.613 20:56:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:26.613 20:56:42 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.613 20:56:42 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.613 20:56:42 -- setup/devices.sh@53 -- # local found=0 00:03:26.613 20:56:42 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:26.613 20:56:42 -- setup/devices.sh@56 -- # : 00:03:26.613 20:56:42 -- setup/devices.sh@59 -- # local pci status 00:03:26.613 20:56:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.613 20:56:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:26.613 20:56:42 -- setup/devices.sh@47 -- # setup output config 00:03:26.613 20:56:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.613 20:56:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:29.902 20:56:45 -- setup/devices.sh@63 -- # found=1 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.902 20:56:45 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:29.902 20:56:45 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.902 20:56:45 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:29.902 20:56:45 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:29.902 20:56:45 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.902 20:56:45 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:29.902 20:56:45 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:29.902 20:56:45 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:29.902 20:56:45 -- setup/devices.sh@50 -- # local mount_point= 00:03:29.902 20:56:45 -- setup/devices.sh@51 -- # local test_file= 00:03:29.902 20:56:45 -- setup/devices.sh@53 -- # local found=0 00:03:29.902 20:56:45 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:29.902 20:56:45 -- setup/devices.sh@59 -- # local pci status 00:03:29.902 20:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.902 20:56:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:29.902 20:56:45 -- setup/devices.sh@47 -- # setup output config 00:03:29.902 20:56:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.902 20:56:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.436 20:56:48 -- 
setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.436 20:56:48 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:32.436 20:56:48 -- setup/devices.sh@63 -- # found=1 00:03:32.436 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.436 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.436 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.436 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.436 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.436 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.437 20:56:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:32.437 20:56:48 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:32.437 20:56:48 -- setup/devices.sh@68 -- # return 0 00:03:32.437 20:56:48 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:32.437 20:56:48 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.437 20:56:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:32.437 20:56:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:32.437 20:56:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:32.437 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:32.437 00:03:32.437 real 0m11.463s 00:03:32.437 user 0m3.323s 00:03:32.437 sys 0m5.909s 00:03:32.437 20:56:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.437 20:56:48 -- common/autotest_common.sh@10 -- # set +x 00:03:32.437 ************************************ 00:03:32.437 END TEST nvme_mount 00:03:32.437 ************************************ 00:03:32.437 20:56:48 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:32.437 20:56:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.437 20:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.437 20:56:48 -- common/autotest_common.sh@10 -- # set +x 00:03:32.696 ************************************ 00:03:32.696 START TEST dm_mount 00:03:32.696 ************************************ 00:03:32.696 20:56:48 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:32.696 20:56:48 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:32.696 20:56:48 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:32.696 20:56:48 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:32.696 20:56:48 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:32.696 20:56:48 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:32.696 20:56:48 -- setup/common.sh@40 -- # local part_no=2 00:03:32.696 20:56:48 -- setup/common.sh@41 -- # local size=1073741824 00:03:32.696 20:56:48 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:32.696 20:56:48 -- setup/common.sh@44 -- # parts=() 00:03:32.696 20:56:48 -- setup/common.sh@44 -- # local parts 00:03:32.696 20:56:48 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:32.696 20:56:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.696 20:56:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.696 20:56:48 -- setup/common.sh@46 -- # (( part++ )) 00:03:32.696 20:56:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.696 20:56:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.696 20:56:48 -- setup/common.sh@46 -- # (( part++ )) 00:03:32.696 20:56:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.696 20:56:48 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:32.696 20:56:48 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:32.696 20:56:48 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:33.632 Creating new GPT entries in memory. 00:03:33.632 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:33.632 other utilities. 00:03:33.632 20:56:49 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:33.632 20:56:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.632 20:56:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.632 20:56:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.632 20:56:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:34.568 Creating new GPT entries in memory. 00:03:34.569 The operation has completed successfully. 
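The dm_mount test above wipes the scratch NVMe disk and carves it into two roughly 1 GiB GPT partitions before layering a device-mapper target on top; the first partition has just been created at this point and the second follows below. A minimal stand-alone sketch of that layout is shown here, using the same device and sector ranges as this run. The harness serializes its sgdisk calls with flock and waits for partition uevents through its own sync_dev_uevents.sh helper; partprobe is used below only as a generic stand-in for that synchronization.

# Sketch only: reproduce the two-partition GPT layout used by the dm_mount test.
# Assumes /dev/nvme0n1 is a scratch disk that may be wiped (as in this run).
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                   # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199        # partition 1: sectors 2048-2099199 (~1 GiB)
sgdisk "$disk" --new=2:2099200:4196351     # partition 2: the following ~1 GiB
partprobe "$disk"                          # stand-in for the harness's uevent sync
lsblk "$disk"                              # expect nvme0n1p1 and nvme0n1p2 to appear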
00:03:34.569 20:56:50 -- setup/common.sh@57 -- # (( part++ )) 00:03:34.569 20:56:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.569 20:56:50 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:34.569 20:56:50 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:34.569 20:56:50 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:35.967 The operation has completed successfully. 00:03:35.967 20:56:51 -- setup/common.sh@57 -- # (( part++ )) 00:03:35.967 20:56:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.967 20:56:51 -- setup/common.sh@62 -- # wait 2844992 00:03:35.967 20:56:51 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:35.967 20:56:51 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.967 20:56:51 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:35.967 20:56:51 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:35.967 20:56:51 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:35.967 20:56:51 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:35.967 20:56:51 -- setup/devices.sh@161 -- # break 00:03:35.967 20:56:51 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:35.967 20:56:51 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:35.967 20:56:51 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:35.967 20:56:51 -- setup/devices.sh@166 -- # dm=dm-2 00:03:35.967 20:56:51 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:35.967 20:56:51 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:35.967 20:56:51 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.967 20:56:51 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:35.967 20:56:51 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.967 20:56:51 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:35.967 20:56:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:35.967 20:56:51 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.967 20:56:51 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:35.967 20:56:51 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:35.967 20:56:51 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:35.967 20:56:51 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.967 20:56:51 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:35.967 20:56:51 -- setup/devices.sh@53 -- # local found=0 00:03:35.967 20:56:51 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:35.967 20:56:51 -- setup/devices.sh@56 -- # : 00:03:35.967 20:56:51 -- 
setup/devices.sh@59 -- # local pci status 00:03:35.967 20:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.967 20:56:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:35.967 20:56:51 -- setup/devices.sh@47 -- # setup output config 00:03:35.967 20:56:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.967 20:56:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.515 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.515 20:56:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:38.515 20:56:54 -- setup/devices.sh@63 -- # found=1 00:03:38.515 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.515 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.515 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.515 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.516 20:56:54 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:38.516 20:56:54 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.516 20:56:54 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:38.516 20:56:54 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:38.516 20:56:54 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.516 20:56:54 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:38.516 20:56:54 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:38.516 20:56:54 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:38.516 20:56:54 -- setup/devices.sh@50 -- # local mount_point= 00:03:38.516 20:56:54 -- setup/devices.sh@51 -- # local test_file= 00:03:38.516 20:56:54 -- setup/devices.sh@53 -- # local found=0 00:03:38.516 20:56:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:38.516 20:56:54 -- setup/devices.sh@59 -- # local pci status 00:03:38.516 20:56:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.516 20:56:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:38.516 20:56:54 -- setup/devices.sh@47 -- # setup output config 00:03:38.516 20:56:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.516 20:56:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:41.804 20:56:57 -- setup/devices.sh@63 -- # found=1 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.804 20:56:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.804 20:56:57 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:41.804 20:56:57 -- setup/devices.sh@68 -- # return 0 00:03:41.804 20:56:57 -- setup/devices.sh@187 -- # cleanup_dm 00:03:41.804 20:56:57 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.804 20:56:57 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:41.804 20:56:57 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:41.804 20:56:57 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:41.804 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:41.804 20:56:57 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:41.804 00:03:41.804 real 0m8.927s 00:03:41.804 user 0m2.118s 00:03:41.804 sys 0m3.692s 00:03:41.804 20:56:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.804 20:56:57 -- common/autotest_common.sh@10 -- # set +x 00:03:41.804 ************************************ 00:03:41.804 END TEST dm_mount 00:03:41.804 ************************************ 00:03:41.804 20:56:57 -- setup/devices.sh@1 -- # cleanup 00:03:41.804 20:56:57 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:41.804 20:56:57 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.804 20:56:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:41.804 20:56:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:41.804 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:41.804 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:41.804 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:41.804 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:41.804 20:56:57 -- setup/devices.sh@12 -- # cleanup_dm 00:03:41.804 20:56:57 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.804 20:56:57 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:41.804 20:56:57 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.804 20:56:57 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:41.804 00:03:41.804 real 0m24.372s 00:03:41.804 user 0m6.827s 00:03:41.804 sys 0m12.036s 00:03:41.804 20:56:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.804 20:56:57 -- common/autotest_common.sh@10 -- # set +x 00:03:41.804 ************************************ 00:03:41.804 END TEST devices 00:03:41.804 ************************************ 00:03:41.804 00:03:41.804 real 1m23.111s 00:03:41.804 user 0m27.289s 00:03:41.804 sys 0m46.229s 00:03:41.804 20:56:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.804 20:56:57 -- common/autotest_common.sh@10 -- # set +x 00:03:41.804 ************************************ 00:03:41.804 END TEST setup.sh 00:03:41.804 ************************************ 00:03:42.064 20:56:57 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.359 Hugepages 00:03:45.359 node hugesize free / total 00:03:45.359 node0 1048576kB 0 / 0 00:03:45.359 node0 2048kB 2048 / 2048 00:03:45.359 node1 1048576kB 0 / 0 00:03:45.359 node1 2048kB 0 / 0 00:03:45.359 00:03:45.359 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.359 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:45.359 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:45.359 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:45.359 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:45.359 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:45.359 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:45.359 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:45.359 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:45.359 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:45.359 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:45.359 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:45.359 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:45.359 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:45.359 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:45.359 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:45.359 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:45.359 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:45.360 20:57:00 -- spdk/autotest.sh@130 -- # uname -s 00:03:45.360 20:57:00 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:45.360 20:57:00 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:45.360 20:57:00 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.889 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:03:47.889 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:47.889 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:48.825 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.825 20:57:04 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:49.761 20:57:05 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:49.761 20:57:05 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:49.761 20:57:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:49.761 20:57:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:49.761 20:57:05 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:49.761 20:57:05 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:49.761 20:57:05 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:49.761 20:57:05 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:49.761 20:57:05 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:49.762 20:57:05 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:49.762 20:57:05 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:03:49.762 20:57:05 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.288 Waiting for block devices as requested 00:03:52.546 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:52.546 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:52.546 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:52.803 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:52.803 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:52.803 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:52.803 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:53.061 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:53.061 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:53.061 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:53.319 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:53.319 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:53.319 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:53.319 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:53.577 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:53.577 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:53.577 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:53.835 20:57:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:53.835 20:57:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:53.835 20:57:09 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:53.835 20:57:09 -- common/autotest_common.sh@1488 -- # grep 0000:5e:00.0/nvme/nvme 00:03:53.835 20:57:09 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:53.835 20:57:09 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:53.835 20:57:09 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:53.835 20:57:09 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:53.835 20:57:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:53.835 20:57:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:53.835 20:57:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:53.835 20:57:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:53.835 20:57:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:53.835 20:57:09 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:53.835 20:57:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:53.835 20:57:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:53.835 20:57:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:53.835 20:57:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:53.835 20:57:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:53.835 20:57:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:53.835 20:57:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:53.835 20:57:09 -- common/autotest_common.sh@1543 -- # continue 00:03:53.835 20:57:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:53.835 20:57:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:53.835 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:03:53.835 20:57:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:53.835 20:57:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:53.835 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:03:53.835 20:57:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.371 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.371 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.371 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.371 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.371 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.371 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.371 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.371 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.648 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.648 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.648 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.648 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.648 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.648 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.648 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.648 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:57.228 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:57.486 20:57:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:57.486 20:57:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:57.486 20:57:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.486 20:57:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:57.486 20:57:13 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:57.486 20:57:13 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:57.486 20:57:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:57.486 20:57:13 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:57.486 20:57:13 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:57.486 20:57:13 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:57.486 
20:57:13 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:57.486 20:57:13 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:57.486 20:57:13 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:57.486 20:57:13 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:57.486 20:57:13 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:57.486 20:57:13 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:03:57.486 20:57:13 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:57.486 20:57:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:57.486 20:57:13 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:57.486 20:57:13 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:57.486 20:57:13 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:57.486 20:57:13 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:5e:00.0 00:03:57.486 20:57:13 -- common/autotest_common.sh@1578 -- # [[ -z 0000:5e:00.0 ]] 00:03:57.486 20:57:13 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=2855108 00:03:57.486 20:57:13 -- common/autotest_common.sh@1584 -- # waitforlisten 2855108 00:03:57.486 20:57:13 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:57.486 20:57:13 -- common/autotest_common.sh@817 -- # '[' -z 2855108 ']' 00:03:57.486 20:57:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.486 20:57:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:57.486 20:57:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.486 20:57:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:57.486 20:57:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.743 [2024-04-18 20:57:13.458914] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
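waitforlisten above blocks until the freshly launched spdk_tgt answers RPCs on /var/tmp/spdk.sock. A rough equivalent, assuming the default socket path and using the rpc_get_methods call purely as a liveness probe (the harness's own helper is more involved), could look like the following sketch.

# Sketch: poll the SPDK RPC socket until the target is ready, or give up after ~50s.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
for i in $(seq 1 100); do
    if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        echo "spdk_tgt is listening on $sock"
        break
    fi
    sleep 0.5                              # target still initializing; retry
done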
00:03:57.743 [2024-04-18 20:57:13.458964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855108 ] 00:03:57.743 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.743 [2024-04-18 20:57:13.520343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.743 [2024-04-18 20:57:13.593741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.680 20:57:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:58.680 20:57:14 -- common/autotest_common.sh@850 -- # return 0 00:03:58.680 20:57:14 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:58.680 20:57:14 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:58.680 20:57:14 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:01.966 nvme0n1 00:04:01.966 20:57:17 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:01.966 [2024-04-18 20:57:17.406316] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:01.966 request: 00:04:01.966 { 00:04:01.966 "nvme_ctrlr_name": "nvme0", 00:04:01.966 "password": "test", 00:04:01.966 "method": "bdev_nvme_opal_revert", 00:04:01.966 "req_id": 1 00:04:01.966 } 00:04:01.966 Got JSON-RPC error response 00:04:01.966 response: 00:04:01.966 { 00:04:01.966 "code": -32602, 00:04:01.966 "message": "Invalid parameters" 00:04:01.966 } 00:04:01.966 20:57:17 -- common/autotest_common.sh@1590 -- # true 00:04:01.966 20:57:17 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:01.966 20:57:17 -- common/autotest_common.sh@1594 -- # killprocess 2855108 00:04:01.966 20:57:17 -- common/autotest_common.sh@936 -- # '[' -z 2855108 ']' 00:04:01.966 20:57:17 -- common/autotest_common.sh@940 -- # kill -0 2855108 00:04:01.966 20:57:17 -- common/autotest_common.sh@941 -- # uname 00:04:01.966 20:57:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:01.966 20:57:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2855108 00:04:01.966 20:57:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:01.966 20:57:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:01.966 20:57:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2855108' 00:04:01.966 killing process with pid 2855108 00:04:01.966 20:57:17 -- common/autotest_common.sh@955 -- # kill 2855108 00:04:01.966 20:57:17 -- common/autotest_common.sh@960 -- # wait 2855108 00:04:03.344 20:57:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:03.344 20:57:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:03.344 20:57:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:03.344 20:57:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:03.344 20:57:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:03.344 20:57:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:03.344 20:57:19 -- common/autotest_common.sh@10 -- # set +x 00:04:03.344 20:57:19 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.344 20:57:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.344 20:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 
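The "Invalid parameters" JSON-RPC error above is the expected outcome on this node: opal_revert_cleanup attaches the controller, attempts an Opal revert, and tolerates the failure because the drive at 0000:5e:00.0 does not support Opal. The two RPC calls as issued in this run are sketched below; the final detach line is an added assumption for tidiness, since the harness instead simply kills the spdk_tgt process.

# Sketch of the Opal revert attempt seen in this log; the error is non-fatal here.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0   # exposes nvme0n1
"$rpc" bdev_nvme_opal_revert -b nvme0 -p test \
    || echo "drive reports no Opal support; continuing"
"$rpc" bdev_nvme_detach_controller nvme0   # assumption: clean detach instead of killing the target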
00:04:03.344 20:57:19 -- common/autotest_common.sh@10 -- # set +x 00:04:03.344 ************************************ 00:04:03.344 START TEST env 00:04:03.344 ************************************ 00:04:03.344 20:57:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.604 * Looking for test storage... 00:04:03.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:03.604 20:57:19 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.604 20:57:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.604 20:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.604 20:57:19 -- common/autotest_common.sh@10 -- # set +x 00:04:03.604 ************************************ 00:04:03.604 START TEST env_memory 00:04:03.604 ************************************ 00:04:03.604 20:57:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.604 00:04:03.604 00:04:03.604 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.604 http://cunit.sourceforge.net/ 00:04:03.604 00:04:03.604 00:04:03.604 Suite: memory 00:04:03.604 Test: alloc and free memory map ...[2024-04-18 20:57:19.468985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.604 passed 00:04:03.604 Test: mem map translation ...[2024-04-18 20:57:19.487106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.604 [2024-04-18 20:57:19.487121] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.604 [2024-04-18 20:57:19.487155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.604 [2024-04-18 20:57:19.487160] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:03.604 passed 00:04:03.604 Test: mem map registration ...[2024-04-18 20:57:19.523923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:03.604 [2024-04-18 20:57:19.523937] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:03.864 passed 00:04:03.864 Test: mem map adjacent registrations ...passed 00:04:03.864 00:04:03.864 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.864 suites 1 1 n/a 0 0 00:04:03.864 tests 4 4 4 0 0 00:04:03.864 asserts 152 152 152 0 n/a 00:04:03.864 00:04:03.864 Elapsed time = 0.127 seconds 00:04:03.864 00:04:03.864 real 0m0.132s 00:04:03.864 user 0m0.126s 00:04:03.864 sys 0m0.006s 00:04:03.864 20:57:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.864 20:57:19 -- common/autotest_common.sh@10 -- # set +x 00:04:03.864 ************************************ 00:04:03.864 END TEST env_memory 00:04:03.864 ************************************ 
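Each env sub-test above is a small CUnit binary driven through run_test, and the *ERROR* lines printed inside env_memory are negative-path assertions rather than failures. To rerun just one of them outside the autotest wrapper, invoking the binary directly is enough; a sketch using the paths from this workspace:

# Sketch: rerun the env_memory unit test on its own and check its exit status.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/test/env/memory/memory_ut"       # prints the same CUnit run summary as above
echo "memory_ut exit code: $?"             # 0 when all assertions pass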
00:04:03.864 20:57:19 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.864 20:57:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.864 20:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.864 20:57:19 -- common/autotest_common.sh@10 -- # set +x 00:04:03.864 ************************************ 00:04:03.864 START TEST env_vtophys 00:04:03.864 ************************************ 00:04:03.864 20:57:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.864 EAL: lib.eal log level changed from notice to debug 00:04:03.864 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.864 EAL: Detected lcore 1 as core 1 on socket 0 00:04:03.864 EAL: Detected lcore 2 as core 2 on socket 0 00:04:03.864 EAL: Detected lcore 3 as core 3 on socket 0 00:04:03.864 EAL: Detected lcore 4 as core 4 on socket 0 00:04:03.864 EAL: Detected lcore 5 as core 5 on socket 0 00:04:03.864 EAL: Detected lcore 6 as core 6 on socket 0 00:04:03.864 EAL: Detected lcore 7 as core 8 on socket 0 00:04:03.864 EAL: Detected lcore 8 as core 9 on socket 0 00:04:03.864 EAL: Detected lcore 9 as core 10 on socket 0 00:04:03.864 EAL: Detected lcore 10 as core 11 on socket 0 00:04:03.864 EAL: Detected lcore 11 as core 12 on socket 0 00:04:03.864 EAL: Detected lcore 12 as core 13 on socket 0 00:04:03.864 EAL: Detected lcore 13 as core 16 on socket 0 00:04:03.864 EAL: Detected lcore 14 as core 17 on socket 0 00:04:03.864 EAL: Detected lcore 15 as core 18 on socket 0 00:04:03.864 EAL: Detected lcore 16 as core 19 on socket 0 00:04:03.864 EAL: Detected lcore 17 as core 20 on socket 0 00:04:03.864 EAL: Detected lcore 18 as core 21 on socket 0 00:04:03.864 EAL: Detected lcore 19 as core 25 on socket 0 00:04:03.864 EAL: Detected lcore 20 as core 26 on socket 0 00:04:03.864 EAL: Detected lcore 21 as core 27 on socket 0 00:04:03.864 EAL: Detected lcore 22 as core 28 on socket 0 00:04:03.864 EAL: Detected lcore 23 as core 29 on socket 0 00:04:03.864 EAL: Detected lcore 24 as core 0 on socket 1 00:04:03.864 EAL: Detected lcore 25 as core 1 on socket 1 00:04:03.864 EAL: Detected lcore 26 as core 2 on socket 1 00:04:03.864 EAL: Detected lcore 27 as core 3 on socket 1 00:04:03.864 EAL: Detected lcore 28 as core 4 on socket 1 00:04:03.864 EAL: Detected lcore 29 as core 5 on socket 1 00:04:03.864 EAL: Detected lcore 30 as core 6 on socket 1 00:04:03.864 EAL: Detected lcore 31 as core 9 on socket 1 00:04:03.864 EAL: Detected lcore 32 as core 10 on socket 1 00:04:03.864 EAL: Detected lcore 33 as core 11 on socket 1 00:04:03.864 EAL: Detected lcore 34 as core 12 on socket 1 00:04:03.864 EAL: Detected lcore 35 as core 13 on socket 1 00:04:03.864 EAL: Detected lcore 36 as core 16 on socket 1 00:04:03.864 EAL: Detected lcore 37 as core 17 on socket 1 00:04:03.864 EAL: Detected lcore 38 as core 18 on socket 1 00:04:03.864 EAL: Detected lcore 39 as core 19 on socket 1 00:04:03.864 EAL: Detected lcore 40 as core 20 on socket 1 00:04:03.864 EAL: Detected lcore 41 as core 21 on socket 1 00:04:03.864 EAL: Detected lcore 42 as core 24 on socket 1 00:04:03.864 EAL: Detected lcore 43 as core 25 on socket 1 00:04:03.864 EAL: Detected lcore 44 as core 26 on socket 1 00:04:03.864 EAL: Detected lcore 45 as core 27 on socket 1 00:04:03.864 EAL: Detected lcore 46 as core 28 on socket 1 00:04:03.864 EAL: Detected lcore 47 as core 29 on socket 1 00:04:03.864 EAL: Detected lcore 48 as core 0 on 
socket 0 00:04:03.864 EAL: Detected lcore 49 as core 1 on socket 0 00:04:03.864 EAL: Detected lcore 50 as core 2 on socket 0 00:04:03.864 EAL: Detected lcore 51 as core 3 on socket 0 00:04:03.864 EAL: Detected lcore 52 as core 4 on socket 0 00:04:03.864 EAL: Detected lcore 53 as core 5 on socket 0 00:04:03.864 EAL: Detected lcore 54 as core 6 on socket 0 00:04:03.864 EAL: Detected lcore 55 as core 8 on socket 0 00:04:03.864 EAL: Detected lcore 56 as core 9 on socket 0 00:04:03.864 EAL: Detected lcore 57 as core 10 on socket 0 00:04:03.864 EAL: Detected lcore 58 as core 11 on socket 0 00:04:03.864 EAL: Detected lcore 59 as core 12 on socket 0 00:04:03.864 EAL: Detected lcore 60 as core 13 on socket 0 00:04:03.864 EAL: Detected lcore 61 as core 16 on socket 0 00:04:03.864 EAL: Detected lcore 62 as core 17 on socket 0 00:04:03.864 EAL: Detected lcore 63 as core 18 on socket 0 00:04:03.864 EAL: Detected lcore 64 as core 19 on socket 0 00:04:03.864 EAL: Detected lcore 65 as core 20 on socket 0 00:04:03.864 EAL: Detected lcore 66 as core 21 on socket 0 00:04:03.864 EAL: Detected lcore 67 as core 25 on socket 0 00:04:03.864 EAL: Detected lcore 68 as core 26 on socket 0 00:04:03.864 EAL: Detected lcore 69 as core 27 on socket 0 00:04:03.864 EAL: Detected lcore 70 as core 28 on socket 0 00:04:03.864 EAL: Detected lcore 71 as core 29 on socket 0 00:04:03.864 EAL: Detected lcore 72 as core 0 on socket 1 00:04:03.864 EAL: Detected lcore 73 as core 1 on socket 1 00:04:03.864 EAL: Detected lcore 74 as core 2 on socket 1 00:04:03.864 EAL: Detected lcore 75 as core 3 on socket 1 00:04:03.864 EAL: Detected lcore 76 as core 4 on socket 1 00:04:03.864 EAL: Detected lcore 77 as core 5 on socket 1 00:04:03.864 EAL: Detected lcore 78 as core 6 on socket 1 00:04:03.864 EAL: Detected lcore 79 as core 9 on socket 1 00:04:03.864 EAL: Detected lcore 80 as core 10 on socket 1 00:04:03.864 EAL: Detected lcore 81 as core 11 on socket 1 00:04:03.864 EAL: Detected lcore 82 as core 12 on socket 1 00:04:03.864 EAL: Detected lcore 83 as core 13 on socket 1 00:04:03.864 EAL: Detected lcore 84 as core 16 on socket 1 00:04:03.864 EAL: Detected lcore 85 as core 17 on socket 1 00:04:03.864 EAL: Detected lcore 86 as core 18 on socket 1 00:04:03.864 EAL: Detected lcore 87 as core 19 on socket 1 00:04:03.864 EAL: Detected lcore 88 as core 20 on socket 1 00:04:03.864 EAL: Detected lcore 89 as core 21 on socket 1 00:04:03.864 EAL: Detected lcore 90 as core 24 on socket 1 00:04:03.864 EAL: Detected lcore 91 as core 25 on socket 1 00:04:03.864 EAL: Detected lcore 92 as core 26 on socket 1 00:04:03.864 EAL: Detected lcore 93 as core 27 on socket 1 00:04:03.864 EAL: Detected lcore 94 as core 28 on socket 1 00:04:03.864 EAL: Detected lcore 95 as core 29 on socket 1 00:04:03.864 EAL: Maximum logical cores by configuration: 128 00:04:03.864 EAL: Detected CPU lcores: 96 00:04:03.864 EAL: Detected NUMA nodes: 2 00:04:03.864 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:03.864 EAL: Detected shared linkage of DPDK 00:04:03.864 EAL: No shared files mode enabled, IPC will be disabled 00:04:03.864 EAL: Bus pci wants IOVA as 'DC' 00:04:03.864 EAL: Buses did not request a specific IOVA mode. 00:04:03.864 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:03.864 EAL: Selected IOVA mode 'VA' 00:04:03.865 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.865 EAL: Probing VFIO support... 
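The lcore dump above is the 2-socket, 96-thread topology of this node as seen by a full EAL bring-up inside the vtophys test, and the "IOMMU is available, selecting IOVA as VA" decision that follows depends on the host IOMMU and VFIO being enabled. A quick sketch (not part of the test) for checking those same preconditions on a host:

# Sketch: verify the conditions behind EAL choosing IOVA mode 'VA' with VFIO.
ls /sys/kernel/iommu_groups | wc -l              # >0 means the IOMMU is active
lsmod | grep -w vfio_pci                         # vfio-pci available for userspace drivers
lscpu | grep -E 'NUMA node\(s\)|^CPU\(s\)'       # expect 2 NUMA nodes / 96 CPUs on this box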
00:04:03.865 EAL: IOMMU type 1 (Type 1) is supported 00:04:03.865 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:03.865 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:03.865 EAL: VFIO support initialized 00:04:03.865 EAL: Ask a virtual area of 0x2e000 bytes 00:04:03.865 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:03.865 EAL: Setting up physically contiguous memory... 00:04:03.865 EAL: Setting maximum number of open files to 524288 00:04:03.865 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:03.865 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:03.865 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:03.865 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.865 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:03.865 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.865 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.865 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:03.865 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:03.865 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.865 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:03.865 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.865 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.865 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:03.865 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:03.865 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.865 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:03.865 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.865 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.865 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:03.865 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:03.865 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.865 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:03.865 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.865 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.865 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:03.865 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:03.865 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:03.865 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.865 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:03.865 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.865 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.865 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:03.865 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:03.865 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.865 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:03.865 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.865 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.865 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:03.865 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:03.865 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.865 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:03.865 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.865 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.865 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:03.865 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:03.865 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.865 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:03.865 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.865 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.865 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:03.865 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:03.865 EAL: Hugepages will be freed exactly as allocated. 00:04:03.865 EAL: No shared files mode enabled, IPC is disabled 00:04:03.865 EAL: No shared files mode enabled, IPC is disabled 00:04:03.865 EAL: TSC frequency is ~2300000 KHz 00:04:03.865 EAL: Main lcore 0 is ready (tid=7f4112b7aa00;cpuset=[0]) 00:04:03.865 EAL: Trying to obtain current memory policy. 00:04:03.865 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.865 EAL: Restoring previous memory policy: 0 00:04:03.865 EAL: request: mp_malloc_sync 00:04:03.865 EAL: No shared files mode enabled, IPC is disabled 00:04:03.865 EAL: Heap on socket 0 was expanded by 2MB 00:04:03.865 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.124 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.124 00:04:04.124 00:04:04.124 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.124 http://cunit.sourceforge.net/ 00:04:04.124 00:04:04.124 00:04:04.124 Suite: components_suite 00:04:04.124 Test: vtophys_malloc_test ...passed 00:04:04.124 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.124 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.124 EAL: Restoring previous memory policy: 4 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.124 EAL: Trying to obtain current memory policy. 00:04:04.124 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.124 EAL: Restoring previous memory policy: 4 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.124 EAL: Trying to obtain current memory policy. 00:04:04.124 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.124 EAL: Restoring previous memory policy: 4 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.124 EAL: Trying to obtain current memory policy. 
00:04:04.124 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.124 EAL: Restoring previous memory policy: 4 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.124 EAL: Trying to obtain current memory policy. 00:04:04.124 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.124 EAL: Restoring previous memory policy: 4 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.124 EAL: request: mp_malloc_sync 00:04:04.124 EAL: No shared files mode enabled, IPC is disabled 00:04:04.124 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.124 EAL: Trying to obtain current memory policy. 00:04:04.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.125 EAL: Restoring previous memory policy: 4 00:04:04.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.125 EAL: request: mp_malloc_sync 00:04:04.125 EAL: No shared files mode enabled, IPC is disabled 00:04:04.125 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.125 EAL: request: mp_malloc_sync 00:04:04.125 EAL: No shared files mode enabled, IPC is disabled 00:04:04.125 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.125 EAL: Trying to obtain current memory policy. 00:04:04.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.125 EAL: Restoring previous memory policy: 4 00:04:04.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.125 EAL: request: mp_malloc_sync 00:04:04.125 EAL: No shared files mode enabled, IPC is disabled 00:04:04.125 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.125 EAL: request: mp_malloc_sync 00:04:04.125 EAL: No shared files mode enabled, IPC is disabled 00:04:04.125 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.125 EAL: Trying to obtain current memory policy. 00:04:04.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.125 EAL: Restoring previous memory policy: 4 00:04:04.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.125 EAL: request: mp_malloc_sync 00:04:04.125 EAL: No shared files mode enabled, IPC is disabled 00:04:04.125 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.125 EAL: request: mp_malloc_sync 00:04:04.125 EAL: No shared files mode enabled, IPC is disabled 00:04:04.125 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.125 EAL: Trying to obtain current memory policy. 
00:04:04.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.384 EAL: Restoring previous memory policy: 4 00:04:04.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.384 EAL: request: mp_malloc_sync 00:04:04.384 EAL: No shared files mode enabled, IPC is disabled 00:04:04.384 EAL: Heap on socket 0 was expanded by 514MB 00:04:04.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.384 EAL: request: mp_malloc_sync 00:04:04.384 EAL: No shared files mode enabled, IPC is disabled 00:04:04.384 EAL: Heap on socket 0 was shrunk by 514MB 00:04:04.384 EAL: Trying to obtain current memory policy. 00:04:04.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.643 EAL: Restoring previous memory policy: 4 00:04:04.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.643 EAL: request: mp_malloc_sync 00:04:04.643 EAL: No shared files mode enabled, IPC is disabled 00:04:04.643 EAL: Heap on socket 0 was expanded by 1026MB 00:04:04.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.902 EAL: request: mp_malloc_sync 00:04:04.902 EAL: No shared files mode enabled, IPC is disabled 00:04:04.902 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:04.902 passed 00:04:04.902 00:04:04.902 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.902 suites 1 1 n/a 0 0 00:04:04.902 tests 2 2 2 0 0 00:04:04.902 asserts 497 497 497 0 n/a 00:04:04.902 00:04:04.902 Elapsed time = 0.975 seconds 00:04:04.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.902 EAL: request: mp_malloc_sync 00:04:04.902 EAL: No shared files mode enabled, IPC is disabled 00:04:04.902 EAL: Heap on socket 0 was shrunk by 2MB 00:04:04.902 EAL: No shared files mode enabled, IPC is disabled 00:04:04.902 EAL: No shared files mode enabled, IPC is disabled 00:04:04.902 EAL: No shared files mode enabled, IPC is disabled 00:04:04.902 00:04:04.902 real 0m1.084s 00:04:04.902 user 0m0.627s 00:04:04.902 sys 0m0.425s 00:04:04.902 20:57:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.902 20:57:20 -- common/autotest_common.sh@10 -- # set +x 00:04:04.902 ************************************ 00:04:04.902 END TEST env_vtophys 00:04:04.902 ************************************ 00:04:05.160 20:57:20 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.160 20:57:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.160 20:57:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.160 20:57:20 -- common/autotest_common.sh@10 -- # set +x 00:04:05.160 ************************************ 00:04:05.160 START TEST env_pci 00:04:05.160 ************************************ 00:04:05.160 20:57:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.160 00:04:05.160 00:04:05.160 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.160 http://cunit.sourceforge.net/ 00:04:05.160 00:04:05.160 00:04:05.160 Suite: pci 00:04:05.160 Test: pci_hook ...[2024-04-18 20:57:20.990877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2856440 has claimed it 00:04:05.160 EAL: Cannot find device (10000:00:01.0) 00:04:05.160 EAL: Failed to attach device on primary process 00:04:05.160 passed 00:04:05.160 00:04:05.160 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.160 suites 1 1 n/a 0 0 00:04:05.160 tests 1 1 1 0 0 
00:04:05.160 asserts 25 25 25 0 n/a 00:04:05.160 00:04:05.160 Elapsed time = 0.031 seconds 00:04:05.160 00:04:05.160 real 0m0.051s 00:04:05.160 user 0m0.015s 00:04:05.160 sys 0m0.036s 00:04:05.160 20:57:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.160 20:57:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.160 ************************************ 00:04:05.160 END TEST env_pci 00:04:05.160 ************************************ 00:04:05.160 20:57:21 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.160 20:57:21 -- env/env.sh@15 -- # uname 00:04:05.160 20:57:21 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.160 20:57:21 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.160 20:57:21 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.160 20:57:21 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:05.160 20:57:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.160 20:57:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.417 ************************************ 00:04:05.417 START TEST env_dpdk_post_init 00:04:05.417 ************************************ 00:04:05.417 20:57:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.417 EAL: Detected CPU lcores: 96 00:04:05.417 EAL: Detected NUMA nodes: 2 00:04:05.417 EAL: Detected shared linkage of DPDK 00:04:05.417 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.417 EAL: Selected IOVA mode 'VA' 00:04:05.417 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.417 EAL: VFIO support initialized 00:04:05.417 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.417 EAL: Using IOMMU type 1 (Type 1) 00:04:05.417 EAL: Ignore mapping IO port bar(1) 00:04:05.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:05.417 EAL: Ignore mapping IO port bar(1) 00:04:05.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:05.417 EAL: Ignore mapping IO port bar(1) 00:04:05.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:05.417 EAL: Ignore mapping IO port bar(1) 00:04:05.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:05.676 EAL: Ignore mapping IO port bar(1) 00:04:05.676 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:05.676 EAL: Ignore mapping IO port bar(1) 00:04:05.676 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:05.676 EAL: Ignore mapping IO port bar(1) 00:04:05.676 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:05.676 EAL: Ignore mapping IO port bar(1) 00:04:05.676 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:06.243 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:06.243 EAL: Ignore mapping IO port bar(1) 00:04:06.243 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:06.243 EAL: Ignore mapping IO port bar(1) 00:04:06.243 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:06.243 EAL: Ignore mapping IO port bar(1) 00:04:06.243 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 
00:04:06.243 EAL: Ignore mapping IO port bar(1) 00:04:06.243 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:06.501 EAL: Ignore mapping IO port bar(1) 00:04:06.501 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:06.501 EAL: Ignore mapping IO port bar(1) 00:04:06.501 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:06.501 EAL: Ignore mapping IO port bar(1) 00:04:06.501 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:06.501 EAL: Ignore mapping IO port bar(1) 00:04:06.501 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:09.779 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:09.779 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:09.779 Starting DPDK initialization... 00:04:09.779 Starting SPDK post initialization... 00:04:09.779 SPDK NVMe probe 00:04:09.779 Attaching to 0000:5e:00.0 00:04:09.779 Attached to 0000:5e:00.0 00:04:09.779 Cleaning up... 00:04:09.779 00:04:09.779 real 0m4.346s 00:04:09.779 user 0m3.298s 00:04:09.779 sys 0m0.123s 00:04:09.779 20:57:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.779 20:57:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.779 ************************************ 00:04:09.779 END TEST env_dpdk_post_init 00:04:09.779 ************************************ 00:04:09.779 20:57:25 -- env/env.sh@26 -- # uname 00:04:09.779 20:57:25 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:09.779 20:57:25 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:09.779 20:57:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.779 20:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.779 20:57:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.779 ************************************ 00:04:09.779 START TEST env_mem_callbacks 00:04:09.779 ************************************ 00:04:09.779 20:57:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:09.779 EAL: Detected CPU lcores: 96 00:04:09.779 EAL: Detected NUMA nodes: 2 00:04:09.779 EAL: Detected shared linkage of DPDK 00:04:10.037 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.037 EAL: Selected IOVA mode 'VA' 00:04:10.037 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.037 EAL: VFIO support initialized 00:04:10.037 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.037 00:04:10.037 00:04:10.037 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.037 http://cunit.sourceforge.net/ 00:04:10.037 00:04:10.037 00:04:10.037 Suite: memory 00:04:10.037 Test: test ... 
00:04:10.037 register 0x200000200000 2097152 00:04:10.037 malloc 3145728 00:04:10.037 register 0x200000400000 4194304 00:04:10.037 buf 0x200000500000 len 3145728 PASSED 00:04:10.037 malloc 64 00:04:10.037 buf 0x2000004fff40 len 64 PASSED 00:04:10.037 malloc 4194304 00:04:10.037 register 0x200000800000 6291456 00:04:10.037 buf 0x200000a00000 len 4194304 PASSED 00:04:10.037 free 0x200000500000 3145728 00:04:10.038 free 0x2000004fff40 64 00:04:10.038 unregister 0x200000400000 4194304 PASSED 00:04:10.038 free 0x200000a00000 4194304 00:04:10.038 unregister 0x200000800000 6291456 PASSED 00:04:10.038 malloc 8388608 00:04:10.038 register 0x200000400000 10485760 00:04:10.038 buf 0x200000600000 len 8388608 PASSED 00:04:10.038 free 0x200000600000 8388608 00:04:10.038 unregister 0x200000400000 10485760 PASSED 00:04:10.038 passed 00:04:10.038 00:04:10.038 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.038 suites 1 1 n/a 0 0 00:04:10.038 tests 1 1 1 0 0 00:04:10.038 asserts 15 15 15 0 n/a 00:04:10.038 00:04:10.038 Elapsed time = 0.005 seconds 00:04:10.038 00:04:10.038 real 0m0.055s 00:04:10.038 user 0m0.018s 00:04:10.038 sys 0m0.037s 00:04:10.038 20:57:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.038 20:57:25 -- common/autotest_common.sh@10 -- # set +x 00:04:10.038 ************************************ 00:04:10.038 END TEST env_mem_callbacks 00:04:10.038 ************************************ 00:04:10.038 00:04:10.038 real 0m6.555s 00:04:10.038 user 0m4.426s 00:04:10.038 sys 0m1.115s 00:04:10.038 20:57:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.038 20:57:25 -- common/autotest_common.sh@10 -- # set +x 00:04:10.038 ************************************ 00:04:10.038 END TEST env 00:04:10.038 ************************************ 00:04:10.038 20:57:25 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:10.038 20:57:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.038 20:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.038 20:57:25 -- common/autotest_common.sh@10 -- # set +x 00:04:10.038 ************************************ 00:04:10.038 START TEST rpc 00:04:10.038 ************************************ 00:04:10.038 20:57:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:10.295 * Looking for test storage... 00:04:10.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:10.295 20:57:26 -- rpc/rpc.sh@65 -- # spdk_pid=2857496 00:04:10.295 20:57:26 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.295 20:57:26 -- rpc/rpc.sh@67 -- # waitforlisten 2857496 00:04:10.295 20:57:26 -- common/autotest_common.sh@817 -- # '[' -z 2857496 ']' 00:04:10.295 20:57:26 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:10.295 20:57:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.295 20:57:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:10.295 20:57:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
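rpc.sh above launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then blocks in waitforlisten until the target's JSON-RPC server accepts connections on /var/tmp/spdk.sock. Outside the harness the same readiness check can be approximated with scripts/rpc.py, which keeps failing until the socket is up; a sketch assuming the SPDK checkout shown in the log:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_tgt -e bdev & tgt_pid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the target answers
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "spdk_tgt ($tgt_pid) is ready"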
00:04:10.295 20:57:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:10.295 20:57:26 -- common/autotest_common.sh@10 -- # set +x 00:04:10.296 [2024-04-18 20:57:26.049065] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:10.296 [2024-04-18 20:57:26.049111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857496 ] 00:04:10.296 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.296 [2024-04-18 20:57:26.107670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.296 [2024-04-18 20:57:26.185918] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:10.296 [2024-04-18 20:57:26.185954] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2857496' to capture a snapshot of events at runtime. 00:04:10.296 [2024-04-18 20:57:26.185962] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:10.296 [2024-04-18 20:57:26.185968] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:10.296 [2024-04-18 20:57:26.185974] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2857496 for offline analysis/debug. 00:04:10.296 [2024-04-18 20:57:26.185992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.228 20:57:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:11.228 20:57:26 -- common/autotest_common.sh@850 -- # return 0 00:04:11.228 20:57:26 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.228 20:57:26 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.228 20:57:26 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:11.228 20:57:26 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:11.228 20:57:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.228 20:57:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.228 20:57:26 -- common/autotest_common.sh@10 -- # set +x 00:04:11.228 ************************************ 00:04:11.228 START TEST rpc_integrity 00:04:11.228 ************************************ 00:04:11.228 20:57:26 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:11.228 20:57:26 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.228 20:57:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.228 20:57:26 -- common/autotest_common.sh@10 -- # set +x 00:04:11.228 20:57:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.228 20:57:26 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.228 20:57:26 -- rpc/rpc.sh@13 -- # jq length 00:04:11.228 20:57:27 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.228 20:57:27 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.228 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 
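rpc_integrity exercises the target purely over JSON-RPC: create an 8 MiB malloc bdev with 512-byte blocks, verify it via bdev_get_bdevs, stack a passthru bdev on top, then delete both and confirm the bdev list is empty again. The rpc_cmd wrapper traced here is equivalent to calling scripts/rpc.py against the same socket; a sketch of the same sequence:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)   # prints the new bdev name (Malloc0 in this run)
    ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 1
    ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$malloc"
    ./scripts/rpc.py bdev_get_bdevs | jq length           # back to 0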
00:04:11.228 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.228 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.228 20:57:27 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.228 20:57:27 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.228 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.228 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.228 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.228 20:57:27 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.228 { 00:04:11.228 "name": "Malloc0", 00:04:11.228 "aliases": [ 00:04:11.228 "6831a0a6-deaf-4b1d-a0b3-93a7b608335c" 00:04:11.228 ], 00:04:11.228 "product_name": "Malloc disk", 00:04:11.228 "block_size": 512, 00:04:11.228 "num_blocks": 16384, 00:04:11.228 "uuid": "6831a0a6-deaf-4b1d-a0b3-93a7b608335c", 00:04:11.228 "assigned_rate_limits": { 00:04:11.228 "rw_ios_per_sec": 0, 00:04:11.228 "rw_mbytes_per_sec": 0, 00:04:11.228 "r_mbytes_per_sec": 0, 00:04:11.228 "w_mbytes_per_sec": 0 00:04:11.228 }, 00:04:11.228 "claimed": false, 00:04:11.228 "zoned": false, 00:04:11.228 "supported_io_types": { 00:04:11.228 "read": true, 00:04:11.228 "write": true, 00:04:11.228 "unmap": true, 00:04:11.228 "write_zeroes": true, 00:04:11.228 "flush": true, 00:04:11.228 "reset": true, 00:04:11.228 "compare": false, 00:04:11.228 "compare_and_write": false, 00:04:11.228 "abort": true, 00:04:11.228 "nvme_admin": false, 00:04:11.228 "nvme_io": false 00:04:11.228 }, 00:04:11.228 "memory_domains": [ 00:04:11.228 { 00:04:11.228 "dma_device_id": "system", 00:04:11.228 "dma_device_type": 1 00:04:11.228 }, 00:04:11.228 { 00:04:11.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.228 "dma_device_type": 2 00:04:11.228 } 00:04:11.228 ], 00:04:11.228 "driver_specific": {} 00:04:11.228 } 00:04:11.228 ]' 00:04:11.228 20:57:27 -- rpc/rpc.sh@17 -- # jq length 00:04:11.228 20:57:27 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.228 20:57:27 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.228 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.228 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.228 [2024-04-18 20:57:27.099237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.228 [2024-04-18 20:57:27.099265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.228 [2024-04-18 20:57:27.099278] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x229b340 00:04:11.228 [2024-04-18 20:57:27.099284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.228 [2024-04-18 20:57:27.100368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.228 [2024-04-18 20:57:27.100390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.228 Passthru0 00:04:11.228 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.228 20:57:27 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.228 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.228 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.228 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.228 20:57:27 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.228 { 00:04:11.228 "name": "Malloc0", 00:04:11.228 "aliases": [ 00:04:11.228 "6831a0a6-deaf-4b1d-a0b3-93a7b608335c" 00:04:11.228 ], 00:04:11.228 "product_name": "Malloc disk", 00:04:11.228 "block_size": 512, 
00:04:11.228 "num_blocks": 16384, 00:04:11.228 "uuid": "6831a0a6-deaf-4b1d-a0b3-93a7b608335c", 00:04:11.228 "assigned_rate_limits": { 00:04:11.228 "rw_ios_per_sec": 0, 00:04:11.228 "rw_mbytes_per_sec": 0, 00:04:11.228 "r_mbytes_per_sec": 0, 00:04:11.228 "w_mbytes_per_sec": 0 00:04:11.228 }, 00:04:11.228 "claimed": true, 00:04:11.228 "claim_type": "exclusive_write", 00:04:11.228 "zoned": false, 00:04:11.228 "supported_io_types": { 00:04:11.228 "read": true, 00:04:11.228 "write": true, 00:04:11.228 "unmap": true, 00:04:11.228 "write_zeroes": true, 00:04:11.228 "flush": true, 00:04:11.228 "reset": true, 00:04:11.228 "compare": false, 00:04:11.228 "compare_and_write": false, 00:04:11.228 "abort": true, 00:04:11.228 "nvme_admin": false, 00:04:11.228 "nvme_io": false 00:04:11.228 }, 00:04:11.228 "memory_domains": [ 00:04:11.228 { 00:04:11.228 "dma_device_id": "system", 00:04:11.228 "dma_device_type": 1 00:04:11.228 }, 00:04:11.228 { 00:04:11.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.228 "dma_device_type": 2 00:04:11.228 } 00:04:11.228 ], 00:04:11.228 "driver_specific": {} 00:04:11.228 }, 00:04:11.228 { 00:04:11.228 "name": "Passthru0", 00:04:11.228 "aliases": [ 00:04:11.228 "3b024d3e-be89-5577-9bd1-61080c8d8ef1" 00:04:11.228 ], 00:04:11.228 "product_name": "passthru", 00:04:11.228 "block_size": 512, 00:04:11.228 "num_blocks": 16384, 00:04:11.228 "uuid": "3b024d3e-be89-5577-9bd1-61080c8d8ef1", 00:04:11.229 "assigned_rate_limits": { 00:04:11.229 "rw_ios_per_sec": 0, 00:04:11.229 "rw_mbytes_per_sec": 0, 00:04:11.229 "r_mbytes_per_sec": 0, 00:04:11.229 "w_mbytes_per_sec": 0 00:04:11.229 }, 00:04:11.229 "claimed": false, 00:04:11.229 "zoned": false, 00:04:11.229 "supported_io_types": { 00:04:11.229 "read": true, 00:04:11.229 "write": true, 00:04:11.229 "unmap": true, 00:04:11.229 "write_zeroes": true, 00:04:11.229 "flush": true, 00:04:11.229 "reset": true, 00:04:11.229 "compare": false, 00:04:11.229 "compare_and_write": false, 00:04:11.229 "abort": true, 00:04:11.229 "nvme_admin": false, 00:04:11.229 "nvme_io": false 00:04:11.229 }, 00:04:11.229 "memory_domains": [ 00:04:11.229 { 00:04:11.229 "dma_device_id": "system", 00:04:11.229 "dma_device_type": 1 00:04:11.229 }, 00:04:11.229 { 00:04:11.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.229 "dma_device_type": 2 00:04:11.229 } 00:04:11.229 ], 00:04:11.229 "driver_specific": { 00:04:11.229 "passthru": { 00:04:11.229 "name": "Passthru0", 00:04:11.229 "base_bdev_name": "Malloc0" 00:04:11.229 } 00:04:11.229 } 00:04:11.229 } 00:04:11.229 ]' 00:04:11.229 20:57:27 -- rpc/rpc.sh@21 -- # jq length 00:04:11.486 20:57:27 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.486 20:57:27 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.486 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.486 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.486 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.486 20:57:27 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.486 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.486 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.486 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.486 20:57:27 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.486 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.486 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.486 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.486 20:57:27 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.486 20:57:27 -- rpc/rpc.sh@26 -- # jq length 00:04:11.486 20:57:27 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.486 00:04:11.486 real 0m0.265s 00:04:11.486 user 0m0.170s 00:04:11.486 sys 0m0.029s 00:04:11.486 20:57:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:11.486 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.486 ************************************ 00:04:11.486 END TEST rpc_integrity 00:04:11.486 ************************************ 00:04:11.486 20:57:27 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:11.486 20:57:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.486 20:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.486 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.486 ************************************ 00:04:11.486 START TEST rpc_plugins 00:04:11.486 ************************************ 00:04:11.486 20:57:27 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:11.486 20:57:27 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:11.486 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.486 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.486 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.486 20:57:27 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:11.486 20:57:27 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:11.486 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.486 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.743 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.743 20:57:27 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:11.743 { 00:04:11.743 "name": "Malloc1", 00:04:11.743 "aliases": [ 00:04:11.743 "9091003c-6c0f-4fc0-b3f6-5420219f72fd" 00:04:11.743 ], 00:04:11.743 "product_name": "Malloc disk", 00:04:11.743 "block_size": 4096, 00:04:11.743 "num_blocks": 256, 00:04:11.743 "uuid": "9091003c-6c0f-4fc0-b3f6-5420219f72fd", 00:04:11.743 "assigned_rate_limits": { 00:04:11.743 "rw_ios_per_sec": 0, 00:04:11.743 "rw_mbytes_per_sec": 0, 00:04:11.743 "r_mbytes_per_sec": 0, 00:04:11.743 "w_mbytes_per_sec": 0 00:04:11.743 }, 00:04:11.743 "claimed": false, 00:04:11.743 "zoned": false, 00:04:11.743 "supported_io_types": { 00:04:11.743 "read": true, 00:04:11.743 "write": true, 00:04:11.743 "unmap": true, 00:04:11.743 "write_zeroes": true, 00:04:11.743 "flush": true, 00:04:11.743 "reset": true, 00:04:11.743 "compare": false, 00:04:11.743 "compare_and_write": false, 00:04:11.743 "abort": true, 00:04:11.743 "nvme_admin": false, 00:04:11.743 "nvme_io": false 00:04:11.743 }, 00:04:11.743 "memory_domains": [ 00:04:11.743 { 00:04:11.743 "dma_device_id": "system", 00:04:11.743 "dma_device_type": 1 00:04:11.743 }, 00:04:11.743 { 00:04:11.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.743 "dma_device_type": 2 00:04:11.743 } 00:04:11.743 ], 00:04:11.743 "driver_specific": {} 00:04:11.743 } 00:04:11.743 ]' 00:04:11.743 20:57:27 -- rpc/rpc.sh@32 -- # jq length 00:04:11.743 20:57:27 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.743 20:57:27 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.743 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.743 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.743 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.743 20:57:27 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.743 20:57:27 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:11.743 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.743 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.743 20:57:27 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.743 20:57:27 -- rpc/rpc.sh@36 -- # jq length 00:04:11.743 20:57:27 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.743 00:04:11.743 real 0m0.139s 00:04:11.743 user 0m0.082s 00:04:11.743 sys 0m0.019s 00:04:11.743 20:57:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:11.743 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.743 ************************************ 00:04:11.743 END TEST rpc_plugins 00:04:11.743 ************************************ 00:04:11.743 20:57:27 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.743 20:57:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.743 20:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.743 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:12.000 ************************************ 00:04:12.000 START TEST rpc_trace_cmd_test 00:04:12.000 ************************************ 00:04:12.000 20:57:27 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:12.000 20:57:27 -- rpc/rpc.sh@40 -- # local info 00:04:12.000 20:57:27 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:12.000 20:57:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.000 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:12.000 20:57:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.000 20:57:27 -- rpc/rpc.sh@42 -- # info='{ 00:04:12.000 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2857496", 00:04:12.000 "tpoint_group_mask": "0x8", 00:04:12.000 "iscsi_conn": { 00:04:12.000 "mask": "0x2", 00:04:12.000 "tpoint_mask": "0x0" 00:04:12.000 }, 00:04:12.000 "scsi": { 00:04:12.000 "mask": "0x4", 00:04:12.000 "tpoint_mask": "0x0" 00:04:12.000 }, 00:04:12.000 "bdev": { 00:04:12.000 "mask": "0x8", 00:04:12.000 "tpoint_mask": "0xffffffffffffffff" 00:04:12.000 }, 00:04:12.000 "nvmf_rdma": { 00:04:12.000 "mask": "0x10", 00:04:12.000 "tpoint_mask": "0x0" 00:04:12.000 }, 00:04:12.000 "nvmf_tcp": { 00:04:12.000 "mask": "0x20", 00:04:12.000 "tpoint_mask": "0x0" 00:04:12.000 }, 00:04:12.000 "ftl": { 00:04:12.000 "mask": "0x40", 00:04:12.000 "tpoint_mask": "0x0" 00:04:12.000 }, 00:04:12.000 "blobfs": { 00:04:12.000 "mask": "0x80", 00:04:12.000 "tpoint_mask": "0x0" 00:04:12.000 }, 00:04:12.000 "dsa": { 00:04:12.000 "mask": "0x200", 00:04:12.000 "tpoint_mask": "0x0" 00:04:12.000 }, 00:04:12.000 "thread": { 00:04:12.000 "mask": "0x400", 00:04:12.000 "tpoint_mask": "0x0" 00:04:12.000 }, 00:04:12.001 "nvme_pcie": { 00:04:12.001 "mask": "0x800", 00:04:12.001 "tpoint_mask": "0x0" 00:04:12.001 }, 00:04:12.001 "iaa": { 00:04:12.001 "mask": "0x1000", 00:04:12.001 "tpoint_mask": "0x0" 00:04:12.001 }, 00:04:12.001 "nvme_tcp": { 00:04:12.001 "mask": "0x2000", 00:04:12.001 "tpoint_mask": "0x0" 00:04:12.001 }, 00:04:12.001 "bdev_nvme": { 00:04:12.001 "mask": "0x4000", 00:04:12.001 "tpoint_mask": "0x0" 00:04:12.001 }, 00:04:12.001 "sock": { 00:04:12.001 "mask": "0x8000", 00:04:12.001 "tpoint_mask": "0x0" 00:04:12.001 } 00:04:12.001 }' 00:04:12.001 20:57:27 -- rpc/rpc.sh@43 -- # jq length 00:04:12.001 20:57:27 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:12.001 20:57:27 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:12.001 20:57:27 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:12.001 20:57:27 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
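The trace_get_info result above reflects the -e bdev flag spdk_tgt was started with: tpoint_group_mask 0x8 selects the bdev group, every tracepoint in that group is enabled (mask 0xffffffffffffffff), and the trace ring lives in /dev/shm/spdk_tgt_trace.pid2857496. While the target is still running, the same data can be read back and a snapshot taken with the command the target itself suggested at startup; a sketch:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py trace_get_info | jq '{tpoint_shm_path, tpoint_group_mask}'
    # capture a snapshot of the shared-memory trace ring for pid 2857496
    ./build/bin/spdk_trace -s spdk_tgt -p 2857496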
00:04:12.001 20:57:27 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:12.001 20:57:27 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:12.001 20:57:27 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:12.001 20:57:27 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:12.001 20:57:27 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:12.001 00:04:12.001 real 0m0.216s 00:04:12.001 user 0m0.184s 00:04:12.001 sys 0m0.023s 00:04:12.001 20:57:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.001 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:12.001 ************************************ 00:04:12.001 END TEST rpc_trace_cmd_test 00:04:12.001 ************************************ 00:04:12.258 20:57:27 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:12.258 20:57:27 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:12.258 20:57:27 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:12.258 20:57:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.258 20:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.258 20:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:12.258 ************************************ 00:04:12.258 START TEST rpc_daemon_integrity 00:04:12.258 ************************************ 00:04:12.258 20:57:28 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:12.258 20:57:28 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.258 20:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.258 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.258 20:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.258 20:57:28 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.258 20:57:28 -- rpc/rpc.sh@13 -- # jq length 00:04:12.258 20:57:28 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.258 20:57:28 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.258 20:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.258 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.258 20:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.258 20:57:28 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:12.258 20:57:28 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.258 20:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.258 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.258 20:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.258 20:57:28 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.258 { 00:04:12.258 "name": "Malloc2", 00:04:12.258 "aliases": [ 00:04:12.258 "39198238-832d-4264-9c54-cdab5fc825de" 00:04:12.258 ], 00:04:12.258 "product_name": "Malloc disk", 00:04:12.258 "block_size": 512, 00:04:12.258 "num_blocks": 16384, 00:04:12.258 "uuid": "39198238-832d-4264-9c54-cdab5fc825de", 00:04:12.258 "assigned_rate_limits": { 00:04:12.258 "rw_ios_per_sec": 0, 00:04:12.258 "rw_mbytes_per_sec": 0, 00:04:12.258 "r_mbytes_per_sec": 0, 00:04:12.258 "w_mbytes_per_sec": 0 00:04:12.258 }, 00:04:12.258 "claimed": false, 00:04:12.258 "zoned": false, 00:04:12.258 "supported_io_types": { 00:04:12.258 "read": true, 00:04:12.258 "write": true, 00:04:12.258 "unmap": true, 00:04:12.258 "write_zeroes": true, 00:04:12.258 "flush": true, 00:04:12.258 "reset": true, 00:04:12.258 "compare": false, 00:04:12.258 "compare_and_write": false, 00:04:12.258 "abort": true, 00:04:12.258 "nvme_admin": false, 00:04:12.258 "nvme_io": false 00:04:12.258 }, 00:04:12.258 "memory_domains": [ 00:04:12.258 { 00:04:12.258 "dma_device_id": "system", 00:04:12.258 
"dma_device_type": 1 00:04:12.258 }, 00:04:12.258 { 00:04:12.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.258 "dma_device_type": 2 00:04:12.258 } 00:04:12.258 ], 00:04:12.258 "driver_specific": {} 00:04:12.258 } 00:04:12.258 ]' 00:04:12.258 20:57:28 -- rpc/rpc.sh@17 -- # jq length 00:04:12.516 20:57:28 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.516 20:57:28 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:12.516 20:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.516 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.516 [2024-04-18 20:57:28.194211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:12.516 [2024-04-18 20:57:28.194238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.516 [2024-04-18 20:57:28.194253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x229b020 00:04:12.516 [2024-04-18 20:57:28.194260] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.516 [2024-04-18 20:57:28.195218] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.516 [2024-04-18 20:57:28.195239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.516 Passthru0 00:04:12.516 20:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.516 20:57:28 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.516 20:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.516 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.516 20:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.516 20:57:28 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.516 { 00:04:12.516 "name": "Malloc2", 00:04:12.516 "aliases": [ 00:04:12.516 "39198238-832d-4264-9c54-cdab5fc825de" 00:04:12.516 ], 00:04:12.516 "product_name": "Malloc disk", 00:04:12.516 "block_size": 512, 00:04:12.516 "num_blocks": 16384, 00:04:12.516 "uuid": "39198238-832d-4264-9c54-cdab5fc825de", 00:04:12.516 "assigned_rate_limits": { 00:04:12.516 "rw_ios_per_sec": 0, 00:04:12.516 "rw_mbytes_per_sec": 0, 00:04:12.516 "r_mbytes_per_sec": 0, 00:04:12.516 "w_mbytes_per_sec": 0 00:04:12.516 }, 00:04:12.516 "claimed": true, 00:04:12.516 "claim_type": "exclusive_write", 00:04:12.516 "zoned": false, 00:04:12.516 "supported_io_types": { 00:04:12.516 "read": true, 00:04:12.516 "write": true, 00:04:12.516 "unmap": true, 00:04:12.516 "write_zeroes": true, 00:04:12.516 "flush": true, 00:04:12.516 "reset": true, 00:04:12.516 "compare": false, 00:04:12.516 "compare_and_write": false, 00:04:12.516 "abort": true, 00:04:12.516 "nvme_admin": false, 00:04:12.516 "nvme_io": false 00:04:12.516 }, 00:04:12.516 "memory_domains": [ 00:04:12.516 { 00:04:12.516 "dma_device_id": "system", 00:04:12.516 "dma_device_type": 1 00:04:12.516 }, 00:04:12.516 { 00:04:12.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.516 "dma_device_type": 2 00:04:12.516 } 00:04:12.516 ], 00:04:12.516 "driver_specific": {} 00:04:12.516 }, 00:04:12.516 { 00:04:12.516 "name": "Passthru0", 00:04:12.516 "aliases": [ 00:04:12.516 "cb6d5192-bc95-595d-91b6-fc0631122ce6" 00:04:12.516 ], 00:04:12.516 "product_name": "passthru", 00:04:12.516 "block_size": 512, 00:04:12.516 "num_blocks": 16384, 00:04:12.516 "uuid": "cb6d5192-bc95-595d-91b6-fc0631122ce6", 00:04:12.516 "assigned_rate_limits": { 00:04:12.516 "rw_ios_per_sec": 0, 00:04:12.516 "rw_mbytes_per_sec": 0, 00:04:12.516 "r_mbytes_per_sec": 0, 00:04:12.516 
"w_mbytes_per_sec": 0 00:04:12.516 }, 00:04:12.516 "claimed": false, 00:04:12.516 "zoned": false, 00:04:12.516 "supported_io_types": { 00:04:12.516 "read": true, 00:04:12.516 "write": true, 00:04:12.516 "unmap": true, 00:04:12.516 "write_zeroes": true, 00:04:12.516 "flush": true, 00:04:12.516 "reset": true, 00:04:12.516 "compare": false, 00:04:12.516 "compare_and_write": false, 00:04:12.516 "abort": true, 00:04:12.516 "nvme_admin": false, 00:04:12.516 "nvme_io": false 00:04:12.516 }, 00:04:12.516 "memory_domains": [ 00:04:12.516 { 00:04:12.516 "dma_device_id": "system", 00:04:12.516 "dma_device_type": 1 00:04:12.516 }, 00:04:12.516 { 00:04:12.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.516 "dma_device_type": 2 00:04:12.516 } 00:04:12.516 ], 00:04:12.516 "driver_specific": { 00:04:12.516 "passthru": { 00:04:12.516 "name": "Passthru0", 00:04:12.516 "base_bdev_name": "Malloc2" 00:04:12.516 } 00:04:12.516 } 00:04:12.516 } 00:04:12.516 ]' 00:04:12.516 20:57:28 -- rpc/rpc.sh@21 -- # jq length 00:04:12.516 20:57:28 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.516 20:57:28 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.516 20:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.516 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.516 20:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.516 20:57:28 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:12.516 20:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.516 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.516 20:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.516 20:57:28 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.516 20:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.516 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.516 20:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.516 20:57:28 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.516 20:57:28 -- rpc/rpc.sh@26 -- # jq length 00:04:12.516 20:57:28 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.516 00:04:12.516 real 0m0.271s 00:04:12.516 user 0m0.171s 00:04:12.516 sys 0m0.041s 00:04:12.516 20:57:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.516 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.516 ************************************ 00:04:12.516 END TEST rpc_daemon_integrity 00:04:12.516 ************************************ 00:04:12.516 20:57:28 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:12.516 20:57:28 -- rpc/rpc.sh@84 -- # killprocess 2857496 00:04:12.516 20:57:28 -- common/autotest_common.sh@936 -- # '[' -z 2857496 ']' 00:04:12.516 20:57:28 -- common/autotest_common.sh@940 -- # kill -0 2857496 00:04:12.516 20:57:28 -- common/autotest_common.sh@941 -- # uname 00:04:12.516 20:57:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:12.516 20:57:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2857496 00:04:12.516 20:57:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:12.516 20:57:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:12.516 20:57:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2857496' 00:04:12.516 killing process with pid 2857496 00:04:12.517 20:57:28 -- common/autotest_common.sh@955 -- # kill 2857496 00:04:12.517 20:57:28 -- common/autotest_common.sh@960 -- # wait 2857496 00:04:13.082 00:04:13.082 real 0m2.821s 00:04:13.082 user 0m3.670s 
00:04:13.082 sys 0m0.812s 00:04:13.082 20:57:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:13.082 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:13.082 ************************************ 00:04:13.082 END TEST rpc 00:04:13.082 ************************************ 00:04:13.082 20:57:28 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:13.082 20:57:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.082 20:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.082 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:13.082 ************************************ 00:04:13.082 START TEST skip_rpc 00:04:13.082 ************************************ 00:04:13.082 20:57:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:13.082 * Looking for test storage... 00:04:13.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.082 20:57:28 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:13.082 20:57:28 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:13.082 20:57:28 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:13.082 20:57:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.082 20:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.082 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:13.340 ************************************ 00:04:13.340 START TEST skip_rpc 00:04:13.340 ************************************ 00:04:13.340 20:57:29 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:13.340 20:57:29 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2858179 00:04:13.340 20:57:29 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.340 20:57:29 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:13.340 20:57:29 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:13.340 [2024-04-18 20:57:29.132045] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:04:13.340 [2024-04-18 20:57:29.132082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858179 ] 00:04:13.340 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.340 [2024-04-18 20:57:29.188831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.340 [2024-04-18 20:57:29.257414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.645 20:57:34 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:18.645 20:57:34 -- common/autotest_common.sh@638 -- # local es=0 00:04:18.645 20:57:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:18.645 20:57:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:18.645 20:57:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:18.645 20:57:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:18.645 20:57:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:18.645 20:57:34 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:18.645 20:57:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.645 20:57:34 -- common/autotest_common.sh@10 -- # set +x 00:04:18.645 20:57:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:18.645 20:57:34 -- common/autotest_common.sh@641 -- # es=1 00:04:18.645 20:57:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:18.645 20:57:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:18.645 20:57:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:18.645 20:57:34 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:18.645 20:57:34 -- rpc/skip_rpc.sh@23 -- # killprocess 2858179 00:04:18.645 20:57:34 -- common/autotest_common.sh@936 -- # '[' -z 2858179 ']' 00:04:18.645 20:57:34 -- common/autotest_common.sh@940 -- # kill -0 2858179 00:04:18.645 20:57:34 -- common/autotest_common.sh@941 -- # uname 00:04:18.645 20:57:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:18.645 20:57:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2858179 00:04:18.645 20:57:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:18.645 20:57:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:18.645 20:57:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2858179' 00:04:18.645 killing process with pid 2858179 00:04:18.645 20:57:34 -- common/autotest_common.sh@955 -- # kill 2858179 00:04:18.645 20:57:34 -- common/autotest_common.sh@960 -- # wait 2858179 00:04:18.645 00:04:18.645 real 0m5.383s 00:04:18.645 user 0m5.154s 00:04:18.645 sys 0m0.251s 00:04:18.645 20:57:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.645 20:57:34 -- common/autotest_common.sh@10 -- # set +x 00:04:18.645 ************************************ 00:04:18.645 END TEST skip_rpc 00:04:18.645 ************************************ 00:04:18.645 20:57:34 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.645 20:57:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.645 20:57:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.645 20:57:34 -- common/autotest_common.sh@10 -- # set +x 00:04:18.904 ************************************ 00:04:18.904 START TEST skip_rpc_with_json 00:04:18.904 ************************************ 
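The skip_rpc case above starts spdk_tgt with --no-rpc-server, so no listener is created on /var/tmp/spdk.sock and the NOT-wrapped rpc_cmd spdk_get_version is required to fail (es=1). A minimal sketch of that negative check, before the JSON-config variant below:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 & tgt=$!
    sleep 5    # mirrors the harness's fixed settle time; there is no socket to poll here
    if ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC answered despite --no-rpc-server" >&2
    else
        echo "as expected: no RPC listener"
    fi
    kill $tgt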
00:04:18.904 20:57:34 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:18.904 20:57:34 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.904 20:57:34 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2859134 00:04:18.904 20:57:34 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.904 20:57:34 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.904 20:57:34 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2859134 00:04:18.904 20:57:34 -- common/autotest_common.sh@817 -- # '[' -z 2859134 ']' 00:04:18.904 20:57:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.904 20:57:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:18.904 20:57:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.904 20:57:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:18.904 20:57:34 -- common/autotest_common.sh@10 -- # set +x 00:04:18.904 [2024-04-18 20:57:34.653997] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:18.904 [2024-04-18 20:57:34.654035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859134 ] 00:04:18.904 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.904 [2024-04-18 20:57:34.710482] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.904 [2024-04-18 20:57:34.787430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.840 20:57:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:19.840 20:57:35 -- common/autotest_common.sh@850 -- # return 0 00:04:19.840 20:57:35 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:19.840 20:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.840 20:57:35 -- common/autotest_common.sh@10 -- # set +x 00:04:19.840 [2024-04-18 20:57:35.436874] nvmf_rpc.c:2534:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:19.840 request: 00:04:19.840 { 00:04:19.840 "trtype": "tcp", 00:04:19.840 "method": "nvmf_get_transports", 00:04:19.840 "req_id": 1 00:04:19.840 } 00:04:19.840 Got JSON-RPC error response 00:04:19.840 response: 00:04:19.840 { 00:04:19.840 "code": -19, 00:04:19.840 "message": "No such device" 00:04:19.840 } 00:04:19.840 20:57:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:19.840 20:57:35 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:19.840 20:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.840 20:57:35 -- common/autotest_common.sh@10 -- # set +x 00:04:19.840 [2024-04-18 20:57:35.444961] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.840 20:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.840 20:57:35 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:19.840 20:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.840 20:57:35 -- common/autotest_common.sh@10 -- # set +x 00:04:19.840 20:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.840 20:57:35 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:19.840 { 
00:04:19.840 "subsystems": [ 00:04:19.840 { 00:04:19.840 "subsystem": "vfio_user_target", 00:04:19.840 "config": null 00:04:19.840 }, 00:04:19.840 { 00:04:19.840 "subsystem": "keyring", 00:04:19.840 "config": [] 00:04:19.840 }, 00:04:19.840 { 00:04:19.840 "subsystem": "iobuf", 00:04:19.840 "config": [ 00:04:19.840 { 00:04:19.840 "method": "iobuf_set_options", 00:04:19.840 "params": { 00:04:19.840 "small_pool_count": 8192, 00:04:19.840 "large_pool_count": 1024, 00:04:19.840 "small_bufsize": 8192, 00:04:19.840 "large_bufsize": 135168 00:04:19.840 } 00:04:19.840 } 00:04:19.840 ] 00:04:19.840 }, 00:04:19.840 { 00:04:19.840 "subsystem": "sock", 00:04:19.840 "config": [ 00:04:19.840 { 00:04:19.840 "method": "sock_impl_set_options", 00:04:19.840 "params": { 00:04:19.840 "impl_name": "posix", 00:04:19.840 "recv_buf_size": 2097152, 00:04:19.840 "send_buf_size": 2097152, 00:04:19.840 "enable_recv_pipe": true, 00:04:19.840 "enable_quickack": false, 00:04:19.840 "enable_placement_id": 0, 00:04:19.840 "enable_zerocopy_send_server": true, 00:04:19.840 "enable_zerocopy_send_client": false, 00:04:19.840 "zerocopy_threshold": 0, 00:04:19.840 "tls_version": 0, 00:04:19.840 "enable_ktls": false 00:04:19.840 } 00:04:19.840 }, 00:04:19.840 { 00:04:19.840 "method": "sock_impl_set_options", 00:04:19.840 "params": { 00:04:19.840 "impl_name": "ssl", 00:04:19.840 "recv_buf_size": 4096, 00:04:19.840 "send_buf_size": 4096, 00:04:19.840 "enable_recv_pipe": true, 00:04:19.840 "enable_quickack": false, 00:04:19.840 "enable_placement_id": 0, 00:04:19.840 "enable_zerocopy_send_server": true, 00:04:19.840 "enable_zerocopy_send_client": false, 00:04:19.840 "zerocopy_threshold": 0, 00:04:19.840 "tls_version": 0, 00:04:19.840 "enable_ktls": false 00:04:19.840 } 00:04:19.840 } 00:04:19.840 ] 00:04:19.840 }, 00:04:19.840 { 00:04:19.840 "subsystem": "vmd", 00:04:19.840 "config": [] 00:04:19.840 }, 00:04:19.840 { 00:04:19.840 "subsystem": "accel", 00:04:19.840 "config": [ 00:04:19.840 { 00:04:19.840 "method": "accel_set_options", 00:04:19.840 "params": { 00:04:19.840 "small_cache_size": 128, 00:04:19.840 "large_cache_size": 16, 00:04:19.840 "task_count": 2048, 00:04:19.840 "sequence_count": 2048, 00:04:19.840 "buf_count": 2048 00:04:19.840 } 00:04:19.840 } 00:04:19.840 ] 00:04:19.840 }, 00:04:19.840 { 00:04:19.841 "subsystem": "bdev", 00:04:19.841 "config": [ 00:04:19.841 { 00:04:19.841 "method": "bdev_set_options", 00:04:19.841 "params": { 00:04:19.841 "bdev_io_pool_size": 65535, 00:04:19.841 "bdev_io_cache_size": 256, 00:04:19.841 "bdev_auto_examine": true, 00:04:19.841 "iobuf_small_cache_size": 128, 00:04:19.841 "iobuf_large_cache_size": 16 00:04:19.841 } 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "method": "bdev_raid_set_options", 00:04:19.841 "params": { 00:04:19.841 "process_window_size_kb": 1024 00:04:19.841 } 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "method": "bdev_iscsi_set_options", 00:04:19.841 "params": { 00:04:19.841 "timeout_sec": 30 00:04:19.841 } 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "method": "bdev_nvme_set_options", 00:04:19.841 "params": { 00:04:19.841 "action_on_timeout": "none", 00:04:19.841 "timeout_us": 0, 00:04:19.841 "timeout_admin_us": 0, 00:04:19.841 "keep_alive_timeout_ms": 10000, 00:04:19.841 "arbitration_burst": 0, 00:04:19.841 "low_priority_weight": 0, 00:04:19.841 "medium_priority_weight": 0, 00:04:19.841 "high_priority_weight": 0, 00:04:19.841 "nvme_adminq_poll_period_us": 10000, 00:04:19.841 "nvme_ioq_poll_period_us": 0, 00:04:19.841 "io_queue_requests": 0, 00:04:19.841 
"delay_cmd_submit": true, 00:04:19.841 "transport_retry_count": 4, 00:04:19.841 "bdev_retry_count": 3, 00:04:19.841 "transport_ack_timeout": 0, 00:04:19.841 "ctrlr_loss_timeout_sec": 0, 00:04:19.841 "reconnect_delay_sec": 0, 00:04:19.841 "fast_io_fail_timeout_sec": 0, 00:04:19.841 "disable_auto_failback": false, 00:04:19.841 "generate_uuids": false, 00:04:19.841 "transport_tos": 0, 00:04:19.841 "nvme_error_stat": false, 00:04:19.841 "rdma_srq_size": 0, 00:04:19.841 "io_path_stat": false, 00:04:19.841 "allow_accel_sequence": false, 00:04:19.841 "rdma_max_cq_size": 0, 00:04:19.841 "rdma_cm_event_timeout_ms": 0, 00:04:19.841 "dhchap_digests": [ 00:04:19.841 "sha256", 00:04:19.841 "sha384", 00:04:19.841 "sha512" 00:04:19.841 ], 00:04:19.841 "dhchap_dhgroups": [ 00:04:19.841 "null", 00:04:19.841 "ffdhe2048", 00:04:19.841 "ffdhe3072", 00:04:19.841 "ffdhe4096", 00:04:19.841 "ffdhe6144", 00:04:19.841 "ffdhe8192" 00:04:19.841 ] 00:04:19.841 } 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "method": "bdev_nvme_set_hotplug", 00:04:19.841 "params": { 00:04:19.841 "period_us": 100000, 00:04:19.841 "enable": false 00:04:19.841 } 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "method": "bdev_wait_for_examine" 00:04:19.841 } 00:04:19.841 ] 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "subsystem": "scsi", 00:04:19.841 "config": null 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "subsystem": "scheduler", 00:04:19.841 "config": [ 00:04:19.841 { 00:04:19.841 "method": "framework_set_scheduler", 00:04:19.841 "params": { 00:04:19.841 "name": "static" 00:04:19.841 } 00:04:19.841 } 00:04:19.841 ] 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "subsystem": "vhost_scsi", 00:04:19.841 "config": [] 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "subsystem": "vhost_blk", 00:04:19.841 "config": [] 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "subsystem": "ublk", 00:04:19.841 "config": [] 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "subsystem": "nbd", 00:04:19.841 "config": [] 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "subsystem": "nvmf", 00:04:19.841 "config": [ 00:04:19.841 { 00:04:19.841 "method": "nvmf_set_config", 00:04:19.841 "params": { 00:04:19.841 "discovery_filter": "match_any", 00:04:19.841 "admin_cmd_passthru": { 00:04:19.841 "identify_ctrlr": false 00:04:19.841 } 00:04:19.841 } 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "method": "nvmf_set_max_subsystems", 00:04:19.841 "params": { 00:04:19.841 "max_subsystems": 1024 00:04:19.841 } 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "method": "nvmf_set_crdt", 00:04:19.841 "params": { 00:04:19.841 "crdt1": 0, 00:04:19.841 "crdt2": 0, 00:04:19.841 "crdt3": 0 00:04:19.841 } 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "method": "nvmf_create_transport", 00:04:19.841 "params": { 00:04:19.841 "trtype": "TCP", 00:04:19.841 "max_queue_depth": 128, 00:04:19.841 "max_io_qpairs_per_ctrlr": 127, 00:04:19.841 "in_capsule_data_size": 4096, 00:04:19.841 "max_io_size": 131072, 00:04:19.841 "io_unit_size": 131072, 00:04:19.841 "max_aq_depth": 128, 00:04:19.841 "num_shared_buffers": 511, 00:04:19.841 "buf_cache_size": 4294967295, 00:04:19.841 "dif_insert_or_strip": false, 00:04:19.841 "zcopy": false, 00:04:19.841 "c2h_success": true, 00:04:19.841 "sock_priority": 0, 00:04:19.841 "abort_timeout_sec": 1, 00:04:19.841 "ack_timeout": 0 00:04:19.841 } 00:04:19.841 } 00:04:19.841 ] 00:04:19.841 }, 00:04:19.841 { 00:04:19.841 "subsystem": "iscsi", 00:04:19.841 "config": [ 00:04:19.841 { 00:04:19.841 "method": "iscsi_set_options", 00:04:19.841 "params": { 00:04:19.841 "node_base": "iqn.2016-06.io.spdk", 
00:04:19.841 "max_sessions": 128, 00:04:19.841 "max_connections_per_session": 2, 00:04:19.841 "max_queue_depth": 64, 00:04:19.841 "default_time2wait": 2, 00:04:19.841 "default_time2retain": 20, 00:04:19.841 "first_burst_length": 8192, 00:04:19.841 "immediate_data": true, 00:04:19.841 "allow_duplicated_isid": false, 00:04:19.841 "error_recovery_level": 0, 00:04:19.841 "nop_timeout": 60, 00:04:19.841 "nop_in_interval": 30, 00:04:19.841 "disable_chap": false, 00:04:19.841 "require_chap": false, 00:04:19.841 "mutual_chap": false, 00:04:19.841 "chap_group": 0, 00:04:19.841 "max_large_datain_per_connection": 64, 00:04:19.841 "max_r2t_per_connection": 4, 00:04:19.841 "pdu_pool_size": 36864, 00:04:19.841 "immediate_data_pool_size": 16384, 00:04:19.841 "data_out_pool_size": 2048 00:04:19.841 } 00:04:19.841 } 00:04:19.841 ] 00:04:19.841 } 00:04:19.841 ] 00:04:19.841 } 00:04:19.841 20:57:35 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:19.841 20:57:35 -- rpc/skip_rpc.sh@40 -- # killprocess 2859134 00:04:19.841 20:57:35 -- common/autotest_common.sh@936 -- # '[' -z 2859134 ']' 00:04:19.841 20:57:35 -- common/autotest_common.sh@940 -- # kill -0 2859134 00:04:19.841 20:57:35 -- common/autotest_common.sh@941 -- # uname 00:04:19.841 20:57:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:19.841 20:57:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2859134 00:04:19.841 20:57:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:19.841 20:57:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:19.841 20:57:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2859134' 00:04:19.841 killing process with pid 2859134 00:04:19.841 20:57:35 -- common/autotest_common.sh@955 -- # kill 2859134 00:04:19.841 20:57:35 -- common/autotest_common.sh@960 -- # wait 2859134 00:04:20.100 20:57:35 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2859370 00:04:20.100 20:57:35 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.100 20:57:35 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:25.369 20:57:40 -- rpc/skip_rpc.sh@50 -- # killprocess 2859370 00:04:25.369 20:57:40 -- common/autotest_common.sh@936 -- # '[' -z 2859370 ']' 00:04:25.369 20:57:40 -- common/autotest_common.sh@940 -- # kill -0 2859370 00:04:25.369 20:57:40 -- common/autotest_common.sh@941 -- # uname 00:04:25.369 20:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:25.369 20:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2859370 00:04:25.369 20:57:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:25.369 20:57:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:25.369 20:57:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2859370' 00:04:25.369 killing process with pid 2859370 00:04:25.369 20:57:41 -- common/autotest_common.sh@955 -- # kill 2859370 00:04:25.369 20:57:41 -- common/autotest_common.sh@960 -- # wait 2859370 00:04:25.628 20:57:41 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:25.628 20:57:41 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:25.628 00:04:25.628 real 0m6.726s 00:04:25.628 user 0m6.552s 00:04:25.628 sys 0m0.532s 00:04:25.628 20:57:41 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.628 20:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.628 ************************************ 00:04:25.628 END TEST skip_rpc_with_json 00:04:25.628 ************************************ 00:04:25.628 20:57:41 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:25.628 20:57:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.628 20:57:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.628 20:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.628 ************************************ 00:04:25.628 START TEST skip_rpc_with_delay 00:04:25.628 ************************************ 00:04:25.628 20:57:41 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:25.628 20:57:41 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.628 20:57:41 -- common/autotest_common.sh@638 -- # local es=0 00:04:25.628 20:57:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.628 20:57:41 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.628 20:57:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.628 20:57:41 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.628 20:57:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.628 20:57:41 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.628 20:57:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.628 20:57:41 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.628 20:57:41 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:25.628 20:57:41 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.628 [2024-04-18 20:57:41.510817] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
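The error above is the intended outcome of the skip_rpc_with_delay case: spdk_tgt is launched with both --no-rpc-server and --wait-for-rpc, and app.c refuses the combination because no RPC server will exist to wait on. Stripped of the NOT/valid_exec_arg wrappers from autotest_common.sh, the check reduces to roughly this (binary path as used in this workspace):

  # expected to fail: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?   # a non-zero exit status is what the test asserts on (es=1 above)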
00:04:25.628 [2024-04-18 20:57:41.510874] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:25.628 20:57:41 -- common/autotest_common.sh@641 -- # es=1 00:04:25.628 20:57:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:25.628 20:57:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:25.628 20:57:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:25.628 00:04:25.628 real 0m0.060s 00:04:25.628 user 0m0.037s 00:04:25.628 sys 0m0.022s 00:04:25.628 20:57:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.628 20:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.628 ************************************ 00:04:25.628 END TEST skip_rpc_with_delay 00:04:25.628 ************************************ 00:04:25.628 20:57:41 -- rpc/skip_rpc.sh@77 -- # uname 00:04:25.628 20:57:41 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:25.628 20:57:41 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:25.628 20:57:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.628 20:57:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.628 20:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.887 ************************************ 00:04:25.887 START TEST exit_on_failed_rpc_init 00:04:25.887 ************************************ 00:04:25.887 20:57:41 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:25.887 20:57:41 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2860361 00:04:25.887 20:57:41 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2860361 00:04:25.887 20:57:41 -- common/autotest_common.sh@817 -- # '[' -z 2860361 ']' 00:04:25.887 20:57:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.887 20:57:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:25.887 20:57:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.887 20:57:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:25.887 20:57:41 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.887 20:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.887 [2024-04-18 20:57:41.693122] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:04:25.888 [2024-04-18 20:57:41.693161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860361 ] 00:04:25.888 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.888 [2024-04-18 20:57:41.749414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.146 [2024-04-18 20:57:41.827674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.715 20:57:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:26.715 20:57:42 -- common/autotest_common.sh@850 -- # return 0 00:04:26.715 20:57:42 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.715 20:57:42 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.715 20:57:42 -- common/autotest_common.sh@638 -- # local es=0 00:04:26.715 20:57:42 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.715 20:57:42 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.715 20:57:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:26.715 20:57:42 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.715 20:57:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:26.715 20:57:42 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.715 20:57:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:26.715 20:57:42 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.715 20:57:42 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:26.715 20:57:42 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.715 [2024-04-18 20:57:42.521535] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:26.715 [2024-04-18 20:57:42.521579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860590 ] 00:04:26.715 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.715 [2024-04-18 20:57:42.578289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.974 [2024-04-18 20:57:42.649652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.975 [2024-04-18 20:57:42.649715] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
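The rpc.c errors that follow come from the exit_on_failed_rpc_init scenario: a first spdk_tgt (pid 2860361) already owns the default /var/tmp/spdk.sock, so a second instance started on core mask 0x2 cannot bind its RPC socket and spdk_app_start fails. A bare-bones sketch of the same sequence, with the waitforlisten and killprocess helpers replaced by plain shell (the sleep is an assumed stand-in for the readiness wait):

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $SPDK_TGT -m 0x1 &    # first target claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 1               # stand-in for waitforlisten
  $SPDK_TGT -m 0x2      # fails: RPC Unix domain socket path /var/tmp/spdk.sock in use
  echo $?               # non-zero; the test folds this into es=1
  kill $first_pid       # teardown is done by killprocess in the real test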
00:04:26.975 [2024-04-18 20:57:42.649724] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:26.975 [2024-04-18 20:57:42.649730] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:26.975 20:57:42 -- common/autotest_common.sh@641 -- # es=234 00:04:26.975 20:57:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:26.975 20:57:42 -- common/autotest_common.sh@650 -- # es=106 00:04:26.975 20:57:42 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:26.975 20:57:42 -- common/autotest_common.sh@658 -- # es=1 00:04:26.975 20:57:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:26.975 20:57:42 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:26.975 20:57:42 -- rpc/skip_rpc.sh@70 -- # killprocess 2860361 00:04:26.975 20:57:42 -- common/autotest_common.sh@936 -- # '[' -z 2860361 ']' 00:04:26.975 20:57:42 -- common/autotest_common.sh@940 -- # kill -0 2860361 00:04:26.975 20:57:42 -- common/autotest_common.sh@941 -- # uname 00:04:26.975 20:57:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:26.975 20:57:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2860361 00:04:26.975 20:57:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:26.975 20:57:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:26.975 20:57:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2860361' 00:04:26.975 killing process with pid 2860361 00:04:26.975 20:57:42 -- common/autotest_common.sh@955 -- # kill 2860361 00:04:26.975 20:57:42 -- common/autotest_common.sh@960 -- # wait 2860361 00:04:27.235 00:04:27.235 real 0m1.469s 00:04:27.235 user 0m1.703s 00:04:27.235 sys 0m0.372s 00:04:27.235 20:57:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.235 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.235 ************************************ 00:04:27.235 END TEST exit_on_failed_rpc_init 00:04:27.235 ************************************ 00:04:27.235 20:57:43 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:27.235 00:04:27.235 real 0m14.246s 00:04:27.235 user 0m13.655s 00:04:27.235 sys 0m1.544s 00:04:27.235 20:57:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.235 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.235 ************************************ 00:04:27.235 END TEST skip_rpc 00:04:27.235 ************************************ 00:04:27.494 20:57:43 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:27.494 20:57:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.494 20:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.494 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.494 ************************************ 00:04:27.494 START TEST rpc_client 00:04:27.494 ************************************ 00:04:27.494 20:57:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:27.494 * Looking for test storage... 
00:04:27.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:27.494 20:57:43 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:27.494 OK 00:04:27.494 20:57:43 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:27.494 00:04:27.494 real 0m0.095s 00:04:27.494 user 0m0.037s 00:04:27.494 sys 0m0.066s 00:04:27.494 20:57:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.494 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.494 ************************************ 00:04:27.494 END TEST rpc_client 00:04:27.494 ************************************ 00:04:27.759 20:57:43 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:27.759 20:57:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.759 20:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.759 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.759 ************************************ 00:04:27.759 START TEST json_config 00:04:27.759 ************************************ 00:04:27.759 20:57:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:27.759 20:57:43 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:27.759 20:57:43 -- nvmf/common.sh@7 -- # uname -s 00:04:27.759 20:57:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.759 20:57:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.759 20:57:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.759 20:57:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.759 20:57:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.759 20:57:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.759 20:57:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.759 20:57:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.759 20:57:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.759 20:57:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.759 20:57:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:27.759 20:57:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:27.759 20:57:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.759 20:57:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.759 20:57:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.759 20:57:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.759 20:57:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:27.759 20:57:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.759 20:57:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.759 20:57:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.759 20:57:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.759 20:57:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.759 20:57:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.759 20:57:43 -- paths/export.sh@5 -- # export PATH 00:04:27.759 20:57:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.759 20:57:43 -- nvmf/common.sh@47 -- # : 0 00:04:27.759 20:57:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:27.759 20:57:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:27.759 20:57:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.759 20:57:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.759 20:57:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.759 20:57:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:27.759 20:57:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:27.759 20:57:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:27.759 20:57:43 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:27.759 20:57:43 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:27.759 20:57:43 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:27.759 20:57:43 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:27.759 20:57:43 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:27.759 20:57:43 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:27.759 20:57:43 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:27.759 20:57:43 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:27.759 20:57:43 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:27.759 20:57:43 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:27.759 20:57:43 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:27.759 20:57:43 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:27.759 20:57:43 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:27.759 20:57:43 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:27.759 20:57:43 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.759 20:57:43 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:27.759 INFO: JSON configuration test init 00:04:27.759 20:57:43 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:27.759 20:57:43 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:27.759 20:57:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:27.759 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.759 20:57:43 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:27.759 20:57:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:27.759 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.759 20:57:43 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:27.759 20:57:43 -- json_config/common.sh@9 -- # local app=target 00:04:27.759 20:57:43 -- json_config/common.sh@10 -- # shift 00:04:27.759 20:57:43 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.759 20:57:43 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.759 20:57:43 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.759 20:57:43 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.759 20:57:43 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.759 20:57:43 -- json_config/common.sh@22 -- # app_pid["$app"]=2860940 00:04:27.759 20:57:43 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.759 Waiting for target to run... 00:04:27.759 20:57:43 -- json_config/common.sh@25 -- # waitforlisten 2860940 /var/tmp/spdk_tgt.sock 00:04:27.759 20:57:43 -- common/autotest_common.sh@817 -- # '[' -z 2860940 ']' 00:04:27.759 20:57:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.759 20:57:43 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:27.759 20:57:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:27.759 20:57:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.759 20:57:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:27.759 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:28.021 [2024-04-18 20:57:43.710527] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:04:28.021 [2024-04-18 20:57:43.710580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860940 ] 00:04:28.021 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.280 [2024-04-18 20:57:44.146720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.539 [2024-04-18 20:57:44.233194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.797 20:57:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.797 20:57:44 -- common/autotest_common.sh@850 -- # return 0 00:04:28.797 20:57:44 -- json_config/common.sh@26 -- # echo '' 00:04:28.797 00:04:28.797 20:57:44 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:28.797 20:57:44 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:28.797 20:57:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:28.797 20:57:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.797 20:57:44 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:28.797 20:57:44 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:28.797 20:57:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:28.797 20:57:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.797 20:57:44 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:28.798 20:57:44 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:28.798 20:57:44 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:32.086 20:57:47 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:32.086 20:57:47 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:32.086 20:57:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:32.086 20:57:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.086 20:57:47 -- json_config/json_config.sh@45 -- # local ret=0 00:04:32.086 20:57:47 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:32.086 20:57:47 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:32.086 20:57:47 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:32.086 20:57:47 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:32.086 20:57:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:32.086 20:57:47 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:32.086 20:57:47 -- json_config/json_config.sh@48 -- # local get_types 00:04:32.086 20:57:47 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:32.086 20:57:47 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:32.086 20:57:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:32.086 20:57:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.086 20:57:47 -- json_config/json_config.sh@55 -- # return 0 00:04:32.086 20:57:47 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:32.086 20:57:47 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:32.086 20:57:47 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:32.086 20:57:47 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:32.086 20:57:47 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:32.086 20:57:47 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:32.086 20:57:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:32.086 20:57:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.086 20:57:47 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:32.086 20:57:47 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:32.086 20:57:47 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:32.086 20:57:47 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.086 20:57:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.086 MallocForNvmf0 00:04:32.086 20:57:47 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.086 20:57:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.346 MallocForNvmf1 00:04:32.346 20:57:48 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.346 20:57:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.604 [2024-04-18 20:57:48.296406] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.604 20:57:48 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:32.604 20:57:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:32.604 20:57:48 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:32.604 20:57:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:32.863 20:57:48 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:32.863 20:57:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.121 20:57:48 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.121 20:57:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.121 [2024-04-18 20:57:48.990721] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:33.121 20:57:49 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:33.121 20:57:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:33.121 
20:57:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.121 20:57:49 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:33.121 20:57:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:33.121 20:57:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.380 20:57:49 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:33.380 20:57:49 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:33.380 20:57:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:33.380 MallocBdevForConfigChangeCheck 00:04:33.380 20:57:49 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:33.380 20:57:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:33.380 20:57:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.380 20:57:49 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:33.380 20:57:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.638 20:57:49 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:33.638 INFO: shutting down applications... 00:04:33.896 20:57:49 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:33.896 20:57:49 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:33.896 20:57:49 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:33.896 20:57:49 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:35.270 Calling clear_iscsi_subsystem 00:04:35.270 Calling clear_nvmf_subsystem 00:04:35.270 Calling clear_nbd_subsystem 00:04:35.270 Calling clear_ublk_subsystem 00:04:35.270 Calling clear_vhost_blk_subsystem 00:04:35.270 Calling clear_vhost_scsi_subsystem 00:04:35.270 Calling clear_bdev_subsystem 00:04:35.270 20:57:51 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:35.270 20:57:51 -- json_config/json_config.sh@343 -- # count=100 00:04:35.270 20:57:51 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:35.270 20:57:51 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:35.270 20:57:51 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.270 20:57:51 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:35.528 20:57:51 -- json_config/json_config.sh@345 -- # break 00:04:35.528 20:57:51 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:35.528 20:57:51 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:35.528 20:57:51 -- json_config/common.sh@31 -- # local app=target 00:04:35.528 20:57:51 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.528 20:57:51 -- json_config/common.sh@35 -- # [[ -n 2860940 ]] 00:04:35.528 20:57:51 -- json_config/common.sh@38 -- # kill -SIGINT 2860940 00:04:35.528 20:57:51 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.528 20:57:51 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.528 20:57:51 -- json_config/common.sh@41 -- # kill -0 2860940 00:04:35.528 20:57:51 -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.156 20:57:51 -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.156 20:57:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.156 20:57:51 -- json_config/common.sh@41 -- # kill -0 2860940 00:04:36.156 20:57:51 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.156 20:57:51 -- json_config/common.sh@43 -- # break 00:04:36.156 20:57:51 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.156 20:57:51 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.156 SPDK target shutdown done 00:04:36.156 20:57:51 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:36.156 INFO: relaunching applications... 00:04:36.156 20:57:51 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.156 20:57:51 -- json_config/common.sh@9 -- # local app=target 00:04:36.156 20:57:51 -- json_config/common.sh@10 -- # shift 00:04:36.156 20:57:51 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.156 20:57:51 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.156 20:57:51 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.156 20:57:51 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.156 20:57:51 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.156 20:57:51 -- json_config/common.sh@22 -- # app_pid["$app"]=2862448 00:04:36.156 20:57:51 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.156 Waiting for target to run... 00:04:36.156 20:57:51 -- json_config/common.sh@25 -- # waitforlisten 2862448 /var/tmp/spdk_tgt.sock 00:04:36.156 20:57:51 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.156 20:57:51 -- common/autotest_common.sh@817 -- # '[' -z 2862448 ']' 00:04:36.156 20:57:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.156 20:57:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.156 20:57:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.156 20:57:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.156 20:57:51 -- common/autotest_common.sh@10 -- # set +x 00:04:36.156 [2024-04-18 20:57:51.996840] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:04:36.156 [2024-04-18 20:57:51.996899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862448 ] 00:04:36.156 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.722 [2024-04-18 20:57:52.439230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.722 [2024-04-18 20:57:52.526137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.007 [2024-04-18 20:57:55.534447] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:40.007 [2024-04-18 20:57:55.566874] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:40.265 20:57:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:40.265 20:57:56 -- common/autotest_common.sh@850 -- # return 0 00:04:40.265 20:57:56 -- json_config/common.sh@26 -- # echo '' 00:04:40.265 00:04:40.265 20:57:56 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:40.265 20:57:56 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:40.265 INFO: Checking if target configuration is the same... 00:04:40.265 20:57:56 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:40.265 20:57:56 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:40.265 20:57:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:40.265 + '[' 2 -ne 2 ']' 00:04:40.265 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:40.265 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:40.266 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:40.266 +++ basename /dev/fd/62 00:04:40.266 ++ mktemp /tmp/62.XXX 00:04:40.266 + tmp_file_1=/tmp/62.w5V 00:04:40.266 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:40.266 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:40.266 + tmp_file_2=/tmp/spdk_tgt_config.json.FgR 00:04:40.266 + ret=0 00:04:40.266 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:40.523 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:40.781 + diff -u /tmp/62.w5V /tmp/spdk_tgt_config.json.FgR 00:04:40.781 + echo 'INFO: JSON config files are the same' 00:04:40.781 INFO: JSON config files are the same 00:04:40.781 + rm /tmp/62.w5V /tmp/spdk_tgt_config.json.FgR 00:04:40.781 + exit 0 00:04:40.781 20:57:56 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:40.781 20:57:56 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:40.781 INFO: changing configuration and checking if this can be detected... 
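Behind the 'JSON config files are the same' verdict above, json_diff.sh normalizes the live configuration and the saved spdk_tgt_config.json with config_filter.py -method sort and diffs the results; an empty diff means the JSON the target was relaunched from round-trips through save_config unchanged. In outline (temporary file names are illustrative, and config_filter.py is assumed to read stdin as in the trace):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  test/json_config/config_filter.py -method sort < /tmp/live.json       > /tmp/live.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted
  diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'

The 'changing configuration' step that follows perturbs exactly one element of that JSON and expects the same diff to fail.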
00:04:40.781 20:57:56 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:40.781 20:57:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:40.781 20:57:56 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:40.781 20:57:56 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:40.781 20:57:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:40.781 + '[' 2 -ne 2 ']' 00:04:40.781 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:40.781 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:40.781 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:40.781 +++ basename /dev/fd/62 00:04:40.781 ++ mktemp /tmp/62.XXX 00:04:40.781 + tmp_file_1=/tmp/62.dds 00:04:40.781 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:40.781 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:40.781 + tmp_file_2=/tmp/spdk_tgt_config.json.xLn 00:04:40.781 + ret=0 00:04:40.781 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:41.039 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:41.298 + diff -u /tmp/62.dds /tmp/spdk_tgt_config.json.xLn 00:04:41.298 + ret=1 00:04:41.298 + echo '=== Start of file: /tmp/62.dds ===' 00:04:41.298 + cat /tmp/62.dds 00:04:41.298 + echo '=== End of file: /tmp/62.dds ===' 00:04:41.298 + echo '' 00:04:41.298 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xLn ===' 00:04:41.298 + cat /tmp/spdk_tgt_config.json.xLn 00:04:41.298 + echo '=== End of file: /tmp/spdk_tgt_config.json.xLn ===' 00:04:41.298 + echo '' 00:04:41.298 + rm /tmp/62.dds /tmp/spdk_tgt_config.json.xLn 00:04:41.298 + exit 1 00:04:41.298 20:57:56 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:41.298 INFO: configuration change detected. 
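MallocBdevForConfigChangeCheck, the small malloc bdev created earlier with bdev_malloc_create 8 512, exists purely as a marker for this check: deleting it over RPC makes the next save_config output diverge from spdk_tgt_config.json, so the sorted diff returns 1 and the test reports the change. Reduced to the two RPC calls involved (the file redirection is illustrative):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/changed.json
  # the sorted diff against spdk_tgt_config.json now fails, which is the expected result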
00:04:41.298 20:57:56 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:41.298 20:57:56 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:41.298 20:57:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:41.298 20:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.298 20:57:57 -- json_config/json_config.sh@307 -- # local ret=0 00:04:41.298 20:57:57 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:41.298 20:57:57 -- json_config/json_config.sh@317 -- # [[ -n 2862448 ]] 00:04:41.298 20:57:57 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:41.298 20:57:57 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:41.298 20:57:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:41.298 20:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.298 20:57:57 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:41.298 20:57:57 -- json_config/json_config.sh@193 -- # uname -s 00:04:41.298 20:57:57 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:41.298 20:57:57 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:41.298 20:57:57 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:41.298 20:57:57 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:41.298 20:57:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:41.298 20:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.298 20:57:57 -- json_config/json_config.sh@323 -- # killprocess 2862448 00:04:41.298 20:57:57 -- common/autotest_common.sh@936 -- # '[' -z 2862448 ']' 00:04:41.298 20:57:57 -- common/autotest_common.sh@940 -- # kill -0 2862448 00:04:41.298 20:57:57 -- common/autotest_common.sh@941 -- # uname 00:04:41.298 20:57:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.298 20:57:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2862448 00:04:41.298 20:57:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.298 20:57:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.298 20:57:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2862448' 00:04:41.298 killing process with pid 2862448 00:04:41.298 20:57:57 -- common/autotest_common.sh@955 -- # kill 2862448 00:04:41.298 20:57:57 -- common/autotest_common.sh@960 -- # wait 2862448 00:04:42.674 20:57:58 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.932 20:57:58 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:42.932 20:57:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:42.932 20:57:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.932 20:57:58 -- json_config/json_config.sh@328 -- # return 0 00:04:42.932 20:57:58 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:42.932 INFO: Success 00:04:42.932 00:04:42.932 real 0m15.086s 00:04:42.932 user 0m15.602s 00:04:42.932 sys 0m2.037s 00:04:42.932 20:57:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.932 20:57:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.932 ************************************ 00:04:42.932 END TEST json_config 00:04:42.932 ************************************ 00:04:42.932 20:57:58 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.932 20:57:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.932 20:57:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.932 20:57:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.932 ************************************ 00:04:42.932 START TEST json_config_extra_key 00:04:42.932 ************************************ 00:04:42.932 20:57:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.932 20:57:58 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.932 20:57:58 -- nvmf/common.sh@7 -- # uname -s 00:04:42.932 20:57:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.932 20:57:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.932 20:57:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.932 20:57:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.932 20:57:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.932 20:57:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.932 20:57:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.932 20:57:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.932 20:57:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.932 20:57:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.932 20:57:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:42.932 20:57:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:42.932 20:57:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.932 20:57:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.932 20:57:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.932 20:57:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.932 20:57:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.932 20:57:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.932 20:57:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.932 20:57:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.932 20:57:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.932 20:57:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.932 20:57:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.190 20:57:58 -- paths/export.sh@5 -- # export PATH 00:04:43.190 20:57:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.190 20:57:58 -- nvmf/common.sh@47 -- # : 0 00:04:43.190 20:57:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:43.190 20:57:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:43.190 20:57:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.190 20:57:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.190 20:57:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.190 20:57:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:43.190 20:57:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:43.190 20:57:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:43.190 INFO: launching applications... 
00:04:43.190 20:57:58 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:43.190 20:57:58 -- json_config/common.sh@9 -- # local app=target 00:04:43.190 20:57:58 -- json_config/common.sh@10 -- # shift 00:04:43.190 20:57:58 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.190 20:57:58 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.190 20:57:58 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.190 20:57:58 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.190 20:57:58 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.190 20:57:58 -- json_config/common.sh@22 -- # app_pid["$app"]=2863724 00:04:43.190 20:57:58 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.190 Waiting for target to run... 00:04:43.190 20:57:58 -- json_config/common.sh@25 -- # waitforlisten 2863724 /var/tmp/spdk_tgt.sock 00:04:43.190 20:57:58 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:43.190 20:57:58 -- common/autotest_common.sh@817 -- # '[' -z 2863724 ']' 00:04:43.190 20:57:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.190 20:57:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:43.190 20:57:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.191 20:57:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:43.191 20:57:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.191 [2024-04-18 20:57:58.920251] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:43.191 [2024-04-18 20:57:58.920298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863724 ] 00:04:43.191 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.448 [2024-04-18 20:57:59.186526] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.448 [2024-04-18 20:57:59.253400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.013 20:57:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:44.013 20:57:59 -- common/autotest_common.sh@850 -- # return 0 00:04:44.013 20:57:59 -- json_config/common.sh@26 -- # echo '' 00:04:44.013 00:04:44.013 20:57:59 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:44.013 INFO: shutting down applications... 
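The shutdown that follows is the generic json_config_test_shutdown_app loop from json_config/common.sh: send SIGINT to the target, then poll with kill -0 in half-second steps for up to 30 iterations until the PID disappears. Its core is approximately:

  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # process gone, shutdown finished
      sleep 0.5
  done
  echo 'SPDK target shutdown done'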
00:04:44.013 20:57:59 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:44.013 20:57:59 -- json_config/common.sh@31 -- # local app=target 00:04:44.013 20:57:59 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:44.013 20:57:59 -- json_config/common.sh@35 -- # [[ -n 2863724 ]] 00:04:44.013 20:57:59 -- json_config/common.sh@38 -- # kill -SIGINT 2863724 00:04:44.013 20:57:59 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:44.013 20:57:59 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.013 20:57:59 -- json_config/common.sh@41 -- # kill -0 2863724 00:04:44.013 20:57:59 -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.580 20:58:00 -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.580 20:58:00 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.580 20:58:00 -- json_config/common.sh@41 -- # kill -0 2863724 00:04:44.580 20:58:00 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:44.580 20:58:00 -- json_config/common.sh@43 -- # break 00:04:44.580 20:58:00 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:44.580 20:58:00 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:44.580 SPDK target shutdown done 00:04:44.580 20:58:00 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:44.580 Success 00:04:44.580 00:04:44.580 real 0m1.439s 00:04:44.580 user 0m1.263s 00:04:44.580 sys 0m0.360s 00:04:44.580 20:58:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.580 20:58:00 -- common/autotest_common.sh@10 -- # set +x 00:04:44.580 ************************************ 00:04:44.580 END TEST json_config_extra_key 00:04:44.580 ************************************ 00:04:44.580 20:58:00 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.580 20:58:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.580 20:58:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.580 20:58:00 -- common/autotest_common.sh@10 -- # set +x 00:04:44.580 ************************************ 00:04:44.580 START TEST alias_rpc 00:04:44.580 ************************************ 00:04:44.580 20:58:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.580 * Looking for test storage... 00:04:44.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:44.580 20:58:00 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:44.580 20:58:00 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2864013 00:04:44.580 20:58:00 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2864013 00:04:44.580 20:58:00 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.580 20:58:00 -- common/autotest_common.sh@817 -- # '[' -z 2864013 ']' 00:04:44.580 20:58:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.580 20:58:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.580 20:58:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
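The alias_rpc run that starts here repeats the common pattern: launch spdk_tgt, wait for the UNIX domain socket, exercise it through rpc.py (the load_config -i call below), then kill the PID. The wait being announced above can be approximated with a readiness probe; rpc_get_methods is used here only as an assumed stand-in for whatever the waitforlisten helper actually polls:

  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done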
00:04:44.580 20:58:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.580 20:58:00 -- common/autotest_common.sh@10 -- # set +x 00:04:44.838 [2024-04-18 20:58:00.522438] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:44.838 [2024-04-18 20:58:00.522488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864013 ] 00:04:44.838 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.838 [2024-04-18 20:58:00.581848] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.838 [2024-04-18 20:58:00.659342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.403 20:58:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:45.403 20:58:01 -- common/autotest_common.sh@850 -- # return 0 00:04:45.403 20:58:01 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:45.660 20:58:01 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2864013 00:04:45.660 20:58:01 -- common/autotest_common.sh@936 -- # '[' -z 2864013 ']' 00:04:45.660 20:58:01 -- common/autotest_common.sh@940 -- # kill -0 2864013 00:04:45.660 20:58:01 -- common/autotest_common.sh@941 -- # uname 00:04:45.660 20:58:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.660 20:58:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2864013 00:04:45.660 20:58:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.660 20:58:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.660 20:58:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2864013' 00:04:45.660 killing process with pid 2864013 00:04:45.660 20:58:01 -- common/autotest_common.sh@955 -- # kill 2864013 00:04:45.660 20:58:01 -- common/autotest_common.sh@960 -- # wait 2864013 00:04:46.226 00:04:46.226 real 0m1.521s 00:04:46.226 user 0m1.670s 00:04:46.226 sys 0m0.402s 00:04:46.226 20:58:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.226 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.226 ************************************ 00:04:46.226 END TEST alias_rpc 00:04:46.226 ************************************ 00:04:46.226 20:58:01 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:46.226 20:58:01 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:46.226 20:58:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.226 20:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.226 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.226 ************************************ 00:04:46.226 START TEST spdkcli_tcp 00:04:46.226 ************************************ 00:04:46.226 20:58:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:46.226 * Looking for test storage... 
00:04:46.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:46.226 20:58:02 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:46.226 20:58:02 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:46.226 20:58:02 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:46.226 20:58:02 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:46.226 20:58:02 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:46.226 20:58:02 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:46.226 20:58:02 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:46.226 20:58:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:46.226 20:58:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.226 20:58:02 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2864313 00:04:46.226 20:58:02 -- spdkcli/tcp.sh@27 -- # waitforlisten 2864313 00:04:46.226 20:58:02 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:46.226 20:58:02 -- common/autotest_common.sh@817 -- # '[' -z 2864313 ']' 00:04:46.226 20:58:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.226 20:58:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:46.226 20:58:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.226 20:58:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:46.226 20:58:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.483 [2024-04-18 20:58:02.195064] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:04:46.484 [2024-04-18 20:58:02.195108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864313 ] 00:04:46.484 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.484 [2024-04-18 20:58:02.255151] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.484 [2024-04-18 20:58:02.332378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.484 [2024-04-18 20:58:02.332380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.420 20:58:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:47.420 20:58:02 -- common/autotest_common.sh@850 -- # return 0 00:04:47.420 20:58:02 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:47.420 20:58:02 -- spdkcli/tcp.sh@31 -- # socat_pid=2864543 00:04:47.420 20:58:02 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:47.420 [ 00:04:47.420 "bdev_malloc_delete", 00:04:47.420 "bdev_malloc_create", 00:04:47.420 "bdev_null_resize", 00:04:47.420 "bdev_null_delete", 00:04:47.420 "bdev_null_create", 00:04:47.420 "bdev_nvme_cuse_unregister", 00:04:47.420 "bdev_nvme_cuse_register", 00:04:47.420 "bdev_opal_new_user", 00:04:47.420 "bdev_opal_set_lock_state", 00:04:47.420 "bdev_opal_delete", 00:04:47.420 "bdev_opal_get_info", 00:04:47.420 "bdev_opal_create", 00:04:47.420 "bdev_nvme_opal_revert", 00:04:47.420 "bdev_nvme_opal_init", 00:04:47.420 "bdev_nvme_send_cmd", 00:04:47.420 "bdev_nvme_get_path_iostat", 00:04:47.420 "bdev_nvme_get_mdns_discovery_info", 00:04:47.420 "bdev_nvme_stop_mdns_discovery", 00:04:47.420 "bdev_nvme_start_mdns_discovery", 00:04:47.420 "bdev_nvme_set_multipath_policy", 00:04:47.420 "bdev_nvme_set_preferred_path", 00:04:47.420 "bdev_nvme_get_io_paths", 00:04:47.420 "bdev_nvme_remove_error_injection", 00:04:47.420 "bdev_nvme_add_error_injection", 00:04:47.420 "bdev_nvme_get_discovery_info", 00:04:47.420 "bdev_nvme_stop_discovery", 00:04:47.420 "bdev_nvme_start_discovery", 00:04:47.420 "bdev_nvme_get_controller_health_info", 00:04:47.420 "bdev_nvme_disable_controller", 00:04:47.420 "bdev_nvme_enable_controller", 00:04:47.420 "bdev_nvme_reset_controller", 00:04:47.420 "bdev_nvme_get_transport_statistics", 00:04:47.420 "bdev_nvme_apply_firmware", 00:04:47.420 "bdev_nvme_detach_controller", 00:04:47.420 "bdev_nvme_get_controllers", 00:04:47.420 "bdev_nvme_attach_controller", 00:04:47.420 "bdev_nvme_set_hotplug", 00:04:47.420 "bdev_nvme_set_options", 00:04:47.420 "bdev_passthru_delete", 00:04:47.420 "bdev_passthru_create", 00:04:47.420 "bdev_lvol_grow_lvstore", 00:04:47.420 "bdev_lvol_get_lvols", 00:04:47.420 "bdev_lvol_get_lvstores", 00:04:47.420 "bdev_lvol_delete", 00:04:47.420 "bdev_lvol_set_read_only", 00:04:47.420 "bdev_lvol_resize", 00:04:47.420 "bdev_lvol_decouple_parent", 00:04:47.420 "bdev_lvol_inflate", 00:04:47.420 "bdev_lvol_rename", 00:04:47.420 "bdev_lvol_clone_bdev", 00:04:47.420 "bdev_lvol_clone", 00:04:47.420 "bdev_lvol_snapshot", 00:04:47.420 "bdev_lvol_create", 00:04:47.420 "bdev_lvol_delete_lvstore", 00:04:47.420 "bdev_lvol_rename_lvstore", 00:04:47.420 "bdev_lvol_create_lvstore", 00:04:47.420 "bdev_raid_set_options", 00:04:47.420 "bdev_raid_remove_base_bdev", 00:04:47.420 "bdev_raid_add_base_bdev", 00:04:47.420 "bdev_raid_delete", 00:04:47.420 "bdev_raid_create", 
00:04:47.420 "bdev_raid_get_bdevs", 00:04:47.420 "bdev_error_inject_error", 00:04:47.420 "bdev_error_delete", 00:04:47.420 "bdev_error_create", 00:04:47.420 "bdev_split_delete", 00:04:47.420 "bdev_split_create", 00:04:47.420 "bdev_delay_delete", 00:04:47.420 "bdev_delay_create", 00:04:47.420 "bdev_delay_update_latency", 00:04:47.420 "bdev_zone_block_delete", 00:04:47.420 "bdev_zone_block_create", 00:04:47.420 "blobfs_create", 00:04:47.420 "blobfs_detect", 00:04:47.420 "blobfs_set_cache_size", 00:04:47.420 "bdev_aio_delete", 00:04:47.420 "bdev_aio_rescan", 00:04:47.420 "bdev_aio_create", 00:04:47.420 "bdev_ftl_set_property", 00:04:47.420 "bdev_ftl_get_properties", 00:04:47.420 "bdev_ftl_get_stats", 00:04:47.420 "bdev_ftl_unmap", 00:04:47.420 "bdev_ftl_unload", 00:04:47.420 "bdev_ftl_delete", 00:04:47.420 "bdev_ftl_load", 00:04:47.420 "bdev_ftl_create", 00:04:47.420 "bdev_virtio_attach_controller", 00:04:47.420 "bdev_virtio_scsi_get_devices", 00:04:47.420 "bdev_virtio_detach_controller", 00:04:47.420 "bdev_virtio_blk_set_hotplug", 00:04:47.420 "bdev_iscsi_delete", 00:04:47.420 "bdev_iscsi_create", 00:04:47.420 "bdev_iscsi_set_options", 00:04:47.420 "accel_error_inject_error", 00:04:47.420 "ioat_scan_accel_module", 00:04:47.420 "dsa_scan_accel_module", 00:04:47.420 "iaa_scan_accel_module", 00:04:47.420 "vfu_virtio_create_scsi_endpoint", 00:04:47.420 "vfu_virtio_scsi_remove_target", 00:04:47.420 "vfu_virtio_scsi_add_target", 00:04:47.420 "vfu_virtio_create_blk_endpoint", 00:04:47.420 "vfu_virtio_delete_endpoint", 00:04:47.420 "keyring_file_remove_key", 00:04:47.420 "keyring_file_add_key", 00:04:47.421 "iscsi_set_options", 00:04:47.421 "iscsi_get_auth_groups", 00:04:47.421 "iscsi_auth_group_remove_secret", 00:04:47.421 "iscsi_auth_group_add_secret", 00:04:47.421 "iscsi_delete_auth_group", 00:04:47.421 "iscsi_create_auth_group", 00:04:47.421 "iscsi_set_discovery_auth", 00:04:47.421 "iscsi_get_options", 00:04:47.421 "iscsi_target_node_request_logout", 00:04:47.421 "iscsi_target_node_set_redirect", 00:04:47.421 "iscsi_target_node_set_auth", 00:04:47.421 "iscsi_target_node_add_lun", 00:04:47.421 "iscsi_get_stats", 00:04:47.421 "iscsi_get_connections", 00:04:47.421 "iscsi_portal_group_set_auth", 00:04:47.421 "iscsi_start_portal_group", 00:04:47.421 "iscsi_delete_portal_group", 00:04:47.421 "iscsi_create_portal_group", 00:04:47.421 "iscsi_get_portal_groups", 00:04:47.421 "iscsi_delete_target_node", 00:04:47.421 "iscsi_target_node_remove_pg_ig_maps", 00:04:47.421 "iscsi_target_node_add_pg_ig_maps", 00:04:47.421 "iscsi_create_target_node", 00:04:47.421 "iscsi_get_target_nodes", 00:04:47.421 "iscsi_delete_initiator_group", 00:04:47.421 "iscsi_initiator_group_remove_initiators", 00:04:47.421 "iscsi_initiator_group_add_initiators", 00:04:47.421 "iscsi_create_initiator_group", 00:04:47.421 "iscsi_get_initiator_groups", 00:04:47.421 "nvmf_set_crdt", 00:04:47.421 "nvmf_set_config", 00:04:47.421 "nvmf_set_max_subsystems", 00:04:47.421 "nvmf_subsystem_get_listeners", 00:04:47.421 "nvmf_subsystem_get_qpairs", 00:04:47.421 "nvmf_subsystem_get_controllers", 00:04:47.421 "nvmf_get_stats", 00:04:47.421 "nvmf_get_transports", 00:04:47.421 "nvmf_create_transport", 00:04:47.421 "nvmf_get_targets", 00:04:47.421 "nvmf_delete_target", 00:04:47.421 "nvmf_create_target", 00:04:47.421 "nvmf_subsystem_allow_any_host", 00:04:47.421 "nvmf_subsystem_remove_host", 00:04:47.421 "nvmf_subsystem_add_host", 00:04:47.421 "nvmf_ns_remove_host", 00:04:47.421 "nvmf_ns_add_host", 00:04:47.421 "nvmf_subsystem_remove_ns", 00:04:47.421 
"nvmf_subsystem_add_ns", 00:04:47.421 "nvmf_subsystem_listener_set_ana_state", 00:04:47.421 "nvmf_discovery_get_referrals", 00:04:47.421 "nvmf_discovery_remove_referral", 00:04:47.421 "nvmf_discovery_add_referral", 00:04:47.421 "nvmf_subsystem_remove_listener", 00:04:47.421 "nvmf_subsystem_add_listener", 00:04:47.421 "nvmf_delete_subsystem", 00:04:47.421 "nvmf_create_subsystem", 00:04:47.421 "nvmf_get_subsystems", 00:04:47.421 "env_dpdk_get_mem_stats", 00:04:47.421 "nbd_get_disks", 00:04:47.421 "nbd_stop_disk", 00:04:47.421 "nbd_start_disk", 00:04:47.421 "ublk_recover_disk", 00:04:47.421 "ublk_get_disks", 00:04:47.421 "ublk_stop_disk", 00:04:47.421 "ublk_start_disk", 00:04:47.421 "ublk_destroy_target", 00:04:47.421 "ublk_create_target", 00:04:47.421 "virtio_blk_create_transport", 00:04:47.421 "virtio_blk_get_transports", 00:04:47.421 "vhost_controller_set_coalescing", 00:04:47.421 "vhost_get_controllers", 00:04:47.421 "vhost_delete_controller", 00:04:47.421 "vhost_create_blk_controller", 00:04:47.421 "vhost_scsi_controller_remove_target", 00:04:47.421 "vhost_scsi_controller_add_target", 00:04:47.421 "vhost_start_scsi_controller", 00:04:47.421 "vhost_create_scsi_controller", 00:04:47.421 "thread_set_cpumask", 00:04:47.421 "framework_get_scheduler", 00:04:47.421 "framework_set_scheduler", 00:04:47.421 "framework_get_reactors", 00:04:47.421 "thread_get_io_channels", 00:04:47.421 "thread_get_pollers", 00:04:47.421 "thread_get_stats", 00:04:47.421 "framework_monitor_context_switch", 00:04:47.421 "spdk_kill_instance", 00:04:47.421 "log_enable_timestamps", 00:04:47.421 "log_get_flags", 00:04:47.421 "log_clear_flag", 00:04:47.421 "log_set_flag", 00:04:47.421 "log_get_level", 00:04:47.421 "log_set_level", 00:04:47.421 "log_get_print_level", 00:04:47.421 "log_set_print_level", 00:04:47.421 "framework_enable_cpumask_locks", 00:04:47.421 "framework_disable_cpumask_locks", 00:04:47.421 "framework_wait_init", 00:04:47.421 "framework_start_init", 00:04:47.421 "scsi_get_devices", 00:04:47.421 "bdev_get_histogram", 00:04:47.421 "bdev_enable_histogram", 00:04:47.421 "bdev_set_qos_limit", 00:04:47.421 "bdev_set_qd_sampling_period", 00:04:47.421 "bdev_get_bdevs", 00:04:47.421 "bdev_reset_iostat", 00:04:47.421 "bdev_get_iostat", 00:04:47.421 "bdev_examine", 00:04:47.421 "bdev_wait_for_examine", 00:04:47.421 "bdev_set_options", 00:04:47.421 "notify_get_notifications", 00:04:47.421 "notify_get_types", 00:04:47.421 "accel_get_stats", 00:04:47.421 "accel_set_options", 00:04:47.421 "accel_set_driver", 00:04:47.421 "accel_crypto_key_destroy", 00:04:47.421 "accel_crypto_keys_get", 00:04:47.421 "accel_crypto_key_create", 00:04:47.421 "accel_assign_opc", 00:04:47.421 "accel_get_module_info", 00:04:47.421 "accel_get_opc_assignments", 00:04:47.421 "vmd_rescan", 00:04:47.421 "vmd_remove_device", 00:04:47.421 "vmd_enable", 00:04:47.421 "sock_set_default_impl", 00:04:47.421 "sock_impl_set_options", 00:04:47.421 "sock_impl_get_options", 00:04:47.421 "iobuf_get_stats", 00:04:47.421 "iobuf_set_options", 00:04:47.421 "keyring_get_keys", 00:04:47.421 "framework_get_pci_devices", 00:04:47.421 "framework_get_config", 00:04:47.421 "framework_get_subsystems", 00:04:47.421 "vfu_tgt_set_base_path", 00:04:47.421 "trace_get_info", 00:04:47.421 "trace_get_tpoint_group_mask", 00:04:47.421 "trace_disable_tpoint_group", 00:04:47.421 "trace_enable_tpoint_group", 00:04:47.421 "trace_clear_tpoint_mask", 00:04:47.421 "trace_set_tpoint_mask", 00:04:47.421 "spdk_get_version", 00:04:47.421 "rpc_get_methods" 00:04:47.421 ] 00:04:47.421 20:58:03 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:47.421 20:58:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.421 20:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:47.421 20:58:03 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:47.421 20:58:03 -- spdkcli/tcp.sh@38 -- # killprocess 2864313 00:04:47.421 20:58:03 -- common/autotest_common.sh@936 -- # '[' -z 2864313 ']' 00:04:47.421 20:58:03 -- common/autotest_common.sh@940 -- # kill -0 2864313 00:04:47.421 20:58:03 -- common/autotest_common.sh@941 -- # uname 00:04:47.421 20:58:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:47.421 20:58:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2864313 00:04:47.421 20:58:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:47.421 20:58:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:47.421 20:58:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2864313' 00:04:47.421 killing process with pid 2864313 00:04:47.421 20:58:03 -- common/autotest_common.sh@955 -- # kill 2864313 00:04:47.421 20:58:03 -- common/autotest_common.sh@960 -- # wait 2864313 00:04:47.681 00:04:47.681 real 0m1.533s 00:04:47.681 user 0m2.825s 00:04:47.681 sys 0m0.423s 00:04:47.681 20:58:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.681 20:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:47.681 ************************************ 00:04:47.681 END TEST spdkcli_tcp 00:04:47.681 ************************************ 00:04:47.940 20:58:03 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:47.940 20:58:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.940 20:58:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.940 20:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:47.940 ************************************ 00:04:47.940 START TEST dpdk_mem_utility 00:04:47.940 ************************************ 00:04:47.940 20:58:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:47.940 * Looking for test storage... 00:04:47.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:47.940 20:58:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:47.940 20:58:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2864658 00:04:47.940 20:58:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2864658 00:04:47.940 20:58:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.940 20:58:03 -- common/autotest_common.sh@817 -- # '[' -z 2864658 ']' 00:04:47.940 20:58:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.940 20:58:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:47.940 20:58:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
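[annotation] The spdkcli_tcp run that finished above bridges the target's UNIX-domain RPC socket onto TCP with socat and then queries rpc_get_methods over 127.0.0.1:9998. A condensed sketch of that bridge, using only the commands visible in the log, follows.

# Sketch of the TCP bridge: socat forwards one TCP connection on port 9998 to
# the target's UNIX-domain socket (no fork option, so a single connection).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# -r and -t mirror the retry count and timeout used by the test above.
"$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true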
00:04:47.940 20:58:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:47.940 20:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:48.198 [2024-04-18 20:58:03.884367] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:48.198 [2024-04-18 20:58:03.884414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864658 ] 00:04:48.198 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.198 [2024-04-18 20:58:03.944619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.198 [2024-04-18 20:58:04.015743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.764 20:58:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:48.764 20:58:04 -- common/autotest_common.sh@850 -- # return 0 00:04:48.764 20:58:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:48.764 20:58:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:48.764 20:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:48.764 20:58:04 -- common/autotest_common.sh@10 -- # set +x 00:04:48.764 { 00:04:48.764 "filename": "/tmp/spdk_mem_dump.txt" 00:04:48.764 } 00:04:48.764 20:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:48.764 20:58:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.023 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:49.023 1 heaps totaling size 814.000000 MiB 00:04:49.023 size: 814.000000 MiB heap id: 0 00:04:49.023 end heaps---------- 00:04:49.023 8 mempools totaling size 598.116089 MiB 00:04:49.023 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:49.023 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:49.024 size: 84.521057 MiB name: bdev_io_2864658 00:04:49.024 size: 51.011292 MiB name: evtpool_2864658 00:04:49.024 size: 50.003479 MiB name: msgpool_2864658 00:04:49.024 size: 21.763794 MiB name: PDU_Pool 00:04:49.024 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:49.024 size: 0.026123 MiB name: Session_Pool 00:04:49.024 end mempools------- 00:04:49.024 6 memzones totaling size 4.142822 MiB 00:04:49.024 size: 1.000366 MiB name: RG_ring_0_2864658 00:04:49.024 size: 1.000366 MiB name: RG_ring_1_2864658 00:04:49.024 size: 1.000366 MiB name: RG_ring_4_2864658 00:04:49.024 size: 1.000366 MiB name: RG_ring_5_2864658 00:04:49.024 size: 0.125366 MiB name: RG_ring_2_2864658 00:04:49.024 size: 0.015991 MiB name: RG_ring_3_2864658 00:04:49.024 end memzones------- 00:04:49.024 20:58:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:49.024 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:49.024 list of free elements. 
size: 12.519348 MiB 00:04:49.024 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:49.024 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:49.024 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:49.024 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:49.024 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:49.024 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:49.024 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:49.024 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:49.024 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:49.024 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:49.024 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:49.024 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:49.024 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:49.024 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:49.024 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:49.024 list of standard malloc elements. size: 199.218079 MiB 00:04:49.024 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:49.024 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:49.024 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:49.024 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:49.024 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:49.024 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:49.024 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:49.024 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:49.024 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:49.024 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:49.024 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:49.024 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:49.024 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:49.024 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:49.024 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:49.024 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:49.024 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:49.024 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:49.024 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:49.024 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:49.024 list of memzone associated elements. size: 602.262573 MiB 00:04:49.024 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:49.024 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:49.024 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:49.024 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:49.024 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:49.024 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2864658_0 00:04:49.024 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:49.024 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2864658_0 00:04:49.024 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:49.024 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2864658_0 00:04:49.024 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:49.024 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:49.024 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:49.024 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:49.024 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:49.024 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2864658 00:04:49.024 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:49.024 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2864658 00:04:49.024 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:49.024 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2864658 00:04:49.024 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:49.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:49.024 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:49.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:49.024 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:49.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:49.024 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:49.024 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:49.024 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:49.024 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2864658 00:04:49.024 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:49.024 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2864658 00:04:49.024 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:49.024 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2864658 00:04:49.024 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:49.024 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2864658 00:04:49.024 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:49.025 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2864658 00:04:49.025 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:49.025 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:49.025 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:49.025 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:49.025 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:49.025 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:49.025 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:49.025 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2864658 00:04:49.025 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:49.025 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:49.025 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:49.025 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:49.025 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:49.025 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2864658 00:04:49.025 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:49.025 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:49.025 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:49.025 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2864658 00:04:49.025 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:49.025 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2864658 00:04:49.025 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:49.025 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:49.025 20:58:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:49.025 20:58:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2864658 00:04:49.025 20:58:04 -- common/autotest_common.sh@936 -- # '[' -z 2864658 ']' 00:04:49.025 20:58:04 -- common/autotest_common.sh@940 -- # kill -0 2864658 00:04:49.025 20:58:04 -- common/autotest_common.sh@941 -- # uname 00:04:49.025 20:58:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.025 20:58:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2864658 00:04:49.025 20:58:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:49.025 20:58:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:49.025 20:58:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2864658' 00:04:49.025 killing process with pid 2864658 00:04:49.025 20:58:04 -- common/autotest_common.sh@955 -- # kill 2864658 00:04:49.025 20:58:04 -- common/autotest_common.sh@960 -- # wait 2864658 00:04:49.284 00:04:49.284 real 0m1.421s 00:04:49.284 user 0m1.484s 00:04:49.284 sys 0m0.401s 00:04:49.284 20:58:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.284 20:58:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.284 ************************************ 00:04:49.284 END TEST dpdk_mem_utility 00:04:49.284 ************************************ 00:04:49.284 20:58:05 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:49.284 20:58:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.284 20:58:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.284 20:58:05 -- common/autotest_common.sh@10 -- # set +x 
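[annotation] The dpdk_mem_utility test that just ended asks the running target to dump its DPDK memory stats and then post-processes the dump with dpdk_mem_info.py, which produced the heap, mempool and memzone listing above. A short sketch of that sequence, as invoked in the test, follows.

# Sketch of the memory-stat dump exercised above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# The RPC reports where the dump was written (/tmp/spdk_mem_dump.txt above).
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# Whole-process summary: heaps, mempools, memzones.
"$SPDK_DIR/scripts/dpdk_mem_info.py"

# Per-element detail for heap id 0, matching the -m 0 invocation in the log.
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0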
00:04:49.543 ************************************ 00:04:49.543 START TEST event 00:04:49.543 ************************************ 00:04:49.543 20:58:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:49.543 * Looking for test storage... 00:04:49.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:49.543 20:58:05 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:49.543 20:58:05 -- bdev/nbd_common.sh@6 -- # set -e 00:04:49.543 20:58:05 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.543 20:58:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:49.543 20:58:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.543 20:58:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.801 ************************************ 00:04:49.801 START TEST event_perf 00:04:49.801 ************************************ 00:04:49.801 20:58:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.801 Running I/O for 1 seconds...[2024-04-18 20:58:05.537886] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:49.801 [2024-04-18 20:58:05.537951] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865142 ] 00:04:49.801 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.801 [2024-04-18 20:58:05.599019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.801 [2024-04-18 20:58:05.670815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.801 [2024-04-18 20:58:05.670913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.801 [2024-04-18 20:58:05.671002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.801 [2024-04-18 20:58:05.671004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.216 Running I/O for 1 seconds... 00:04:51.217 lcore 0: 209226 00:04:51.217 lcore 1: 209224 00:04:51.217 lcore 2: 209227 00:04:51.217 lcore 3: 209226 00:04:51.217 done. 
00:04:51.217 00:04:51.217 real 0m1.243s 00:04:51.217 user 0m4.167s 00:04:51.217 sys 0m0.074s 00:04:51.217 20:58:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.217 20:58:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.217 ************************************ 00:04:51.217 END TEST event_perf 00:04:51.217 ************************************ 00:04:51.217 20:58:06 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:51.217 20:58:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:51.217 20:58:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.217 20:58:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.217 ************************************ 00:04:51.217 START TEST event_reactor 00:04:51.217 ************************************ 00:04:51.217 20:58:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:51.217 [2024-04-18 20:58:06.927672] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:51.217 [2024-04-18 20:58:06.927716] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865407 ] 00:04:51.217 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.217 [2024-04-18 20:58:06.986175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.217 [2024-04-18 20:58:07.059765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.660 test_start 00:04:52.660 oneshot 00:04:52.660 tick 100 00:04:52.660 tick 100 00:04:52.660 tick 250 00:04:52.660 tick 100 00:04:52.660 tick 100 00:04:52.660 tick 100 00:04:52.660 tick 250 00:04:52.660 tick 500 00:04:52.660 tick 100 00:04:52.660 tick 100 00:04:52.660 tick 250 00:04:52.660 tick 100 00:04:52.660 tick 100 00:04:52.660 test_end 00:04:52.660 00:04:52.660 real 0m1.228s 00:04:52.660 user 0m1.155s 00:04:52.660 sys 0m0.069s 00:04:52.660 20:58:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.660 20:58:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.660 ************************************ 00:04:52.660 END TEST event_reactor 00:04:52.660 ************************************ 00:04:52.660 20:58:08 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:52.660 20:58:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:52.660 20:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.660 20:58:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.660 ************************************ 00:04:52.660 START TEST event_reactor_perf 00:04:52.660 ************************************ 00:04:52.660 20:58:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:52.660 [2024-04-18 20:58:08.319985] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:04:52.660 [2024-04-18 20:58:08.320026] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865660 ] 00:04:52.660 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.660 [2024-04-18 20:58:08.378641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.660 [2024-04-18 20:58:08.449454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.035 test_start 00:04:54.035 test_end 00:04:54.035 Performance: 498615 events per second 00:04:54.035 00:04:54.035 real 0m1.232s 00:04:54.035 user 0m1.156s 00:04:54.035 sys 0m0.072s 00:04:54.035 20:58:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.035 20:58:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.035 ************************************ 00:04:54.035 END TEST event_reactor_perf 00:04:54.035 ************************************ 00:04:54.035 20:58:09 -- event/event.sh@49 -- # uname -s 00:04:54.035 20:58:09 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.035 20:58:09 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.035 20:58:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.035 20:58:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.035 20:58:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.035 ************************************ 00:04:54.035 START TEST event_scheduler 00:04:54.035 ************************************ 00:04:54.035 20:58:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.035 * Looking for test storage... 00:04:54.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:54.035 20:58:09 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.035 20:58:09 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2865950 00:04:54.035 20:58:09 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.035 20:58:09 -- scheduler/scheduler.sh@37 -- # waitforlisten 2865950 00:04:54.035 20:58:09 -- common/autotest_common.sh@817 -- # '[' -z 2865950 ']' 00:04:54.035 20:58:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.035 20:58:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.035 20:58:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.035 20:58:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.035 20:58:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.035 20:58:09 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.035 [2024-04-18 20:58:09.818863] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:04:54.035 [2024-04-18 20:58:09.818909] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865950 ] 00:04:54.035 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.035 [2024-04-18 20:58:09.878673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.035 [2024-04-18 20:58:09.957680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.035 [2024-04-18 20:58:09.957743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.035 [2024-04-18 20:58:09.957856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.035 [2024-04-18 20:58:09.957858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.970 20:58:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:54.970 20:58:10 -- common/autotest_common.sh@850 -- # return 0 00:04:54.970 20:58:10 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:54.970 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.970 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:54.970 POWER: Env isn't set yet! 00:04:54.970 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:54.970 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.970 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.970 POWER: Attempting to initialise PSTAT power management... 00:04:54.970 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:54.970 POWER: Initialized successfully for lcore 0 power management 00:04:54.970 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:54.970 POWER: Initialized successfully for lcore 1 power management 00:04:54.970 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:54.970 POWER: Initialized successfully for lcore 2 power management 00:04:54.970 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:54.970 POWER: Initialized successfully for lcore 3 power management 00:04:54.970 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.970 20:58:10 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:54.970 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.970 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:54.970 [2024-04-18 20:58:10.747340] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
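[annotation] The scheduler app above is started with --wait-for-rpc, so the test first selects the dynamic scheduler, then lets framework initialization proceed, which is when the per-lcore power governors are switched to 'performance'. A minimal sketch of that RPC sequence follows, assuming the target listens on the default /var/tmp/spdk.sock as it does here.

# Sketch of the scheduler-selection RPCs driven against a --wait-for-rpc target.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Select the dynamic scheduler before the framework finishes initializing,
# then let initialization proceed and confirm the active scheduler.
"$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
"$SPDK_DIR/scripts/rpc.py" framework_start_init
"$SPDK_DIR/scripts/rpc.py" framework_get_scheduler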
00:04:54.970 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.970 20:58:10 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:54.970 20:58:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.970 20:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.970 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:54.970 ************************************ 00:04:54.970 START TEST scheduler_create_thread 00:04:54.970 ************************************ 00:04:54.970 20:58:10 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:54.970 20:58:10 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:54.970 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.970 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:54.970 2 00:04:54.970 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.970 20:58:10 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:54.970 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.970 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:54.970 3 00:04:54.970 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.970 20:58:10 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:54.970 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.970 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.228 4 00:04:55.228 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.228 20:58:10 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:55.228 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.228 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.228 5 00:04:55.228 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.229 20:58:10 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:55.229 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.229 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.229 6 00:04:55.229 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.229 20:58:10 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:55.229 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.229 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.229 7 00:04:55.229 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.229 20:58:10 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:55.229 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.229 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.229 8 00:04:55.229 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.229 20:58:10 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:55.229 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.229 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.229 9 00:04:55.229 
20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.229 20:58:10 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:55.229 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.229 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.229 10 00:04:55.229 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.229 20:58:10 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:55.229 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.229 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.229 20:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.229 20:58:10 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:55.229 20:58:10 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:55.229 20:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.229 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.164 20:58:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.164 20:58:11 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:56.164 20:58:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.164 20:58:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.540 20:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.540 20:58:13 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:57.540 20:58:13 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:57.540 20:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.540 20:58:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.475 20:58:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.475 00:04:58.475 real 0m3.380s 00:04:58.475 user 0m0.025s 00:04:58.475 sys 0m0.003s 00:04:58.475 20:58:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.475 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.475 ************************************ 00:04:58.475 END TEST scheduler_create_thread 00:04:58.475 ************************************ 00:04:58.475 20:58:14 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:58.475 20:58:14 -- scheduler/scheduler.sh@46 -- # killprocess 2865950 00:04:58.475 20:58:14 -- common/autotest_common.sh@936 -- # '[' -z 2865950 ']' 00:04:58.475 20:58:14 -- common/autotest_common.sh@940 -- # kill -0 2865950 00:04:58.475 20:58:14 -- common/autotest_common.sh@941 -- # uname 00:04:58.475 20:58:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.475 20:58:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2865950 00:04:58.475 20:58:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:58.475 20:58:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:58.475 20:58:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2865950' 00:04:58.475 killing process with pid 2865950 00:04:58.475 20:58:14 -- common/autotest_common.sh@955 -- # kill 2865950 00:04:58.475 20:58:14 -- common/autotest_common.sh@960 -- # wait 2865950 00:04:58.733 [2024-04-18 20:58:14.639635] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
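[annotation] The scheduler_create_thread test above drives the app through plugin RPCs (scheduler_thread_create, scheduler_thread_set_active, scheduler_thread_delete). The sketch below condenses those calls; the scheduler_plugin module ships with the test app, and making it importable (for example via PYTHONPATH) is assumed here because the log does not show how rpc_cmd finds it.

# Sketch of the plugin RPCs used by scheduler_create_thread, under the
# assumption that scheduler_plugin is importable by rpc.py.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"

# Pinned threads: one fully active, one idle, on the same core mask.
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
$RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0

# Unpinned thread: the create call prints the new thread id (thread_id=11 in
# the log), which the later calls take as an argument.
tid=$($RPC scheduler_thread_create -n half_active -a 0)
$RPC scheduler_thread_set_active "$tid" 50
$RPC scheduler_thread_delete "$tid"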
00:04:58.992 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:58.992 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:58.992 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:58.992 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:58.992 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:58.992 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:58.992 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:58.992 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:58.992 00:04:58.992 real 0m5.210s 00:04:58.992 user 0m10.719s 00:04:58.992 sys 0m0.426s 00:04:58.992 20:58:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.992 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.992 ************************************ 00:04:58.992 END TEST event_scheduler 00:04:58.992 ************************************ 00:04:59.251 20:58:14 -- event/event.sh@51 -- # modprobe -n nbd 00:04:59.251 20:58:14 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:59.251 20:58:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.251 20:58:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.251 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:04:59.251 ************************************ 00:04:59.251 START TEST app_repeat 00:04:59.251 ************************************ 00:04:59.251 20:58:15 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:59.251 20:58:15 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.251 20:58:15 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.251 20:58:15 -- event/event.sh@13 -- # local nbd_list 00:04:59.251 20:58:15 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.251 20:58:15 -- event/event.sh@14 -- # local bdev_list 00:04:59.251 20:58:15 -- event/event.sh@15 -- # local repeat_times=4 00:04:59.251 20:58:15 -- event/event.sh@17 -- # modprobe nbd 00:04:59.251 20:58:15 -- event/event.sh@19 -- # repeat_pid=2866931 00:04:59.251 20:58:15 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.251 20:58:15 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:59.251 20:58:15 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2866931' 00:04:59.251 Process app_repeat pid: 2866931 00:04:59.251 20:58:15 -- event/event.sh@23 -- # for i in {0..2} 00:04:59.251 20:58:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:59.251 spdk_app_start Round 0 00:04:59.251 20:58:15 -- event/event.sh@25 -- # waitforlisten 2866931 /var/tmp/spdk-nbd.sock 00:04:59.251 20:58:15 -- common/autotest_common.sh@817 -- # '[' -z 2866931 ']' 00:04:59.251 20:58:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.251 20:58:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:59.251 20:58:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:59.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.251 20:58:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:59.251 20:58:15 -- common/autotest_common.sh@10 -- # set +x 00:04:59.251 [2024-04-18 20:58:15.088247] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:04:59.251 [2024-04-18 20:58:15.088293] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866931 ] 00:04:59.251 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.251 [2024-04-18 20:58:15.148760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.510 [2024-04-18 20:58:15.226717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.510 [2024-04-18 20:58:15.226721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.076 20:58:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:00.076 20:58:15 -- common/autotest_common.sh@850 -- # return 0 00:05:00.076 20:58:15 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.335 Malloc0 00:05:00.335 20:58:16 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.594 Malloc1 00:05:00.594 20:58:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@12 -- # local i 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.594 /dev/nbd0 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.594 20:58:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:00.594 20:58:16 -- common/autotest_common.sh@855 -- # local i 00:05:00.594 20:58:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:00.594 20:58:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:00.594 20:58:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:00.594 20:58:16 -- 
common/autotest_common.sh@859 -- # break 00:05:00.594 20:58:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:00.594 20:58:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:00.594 20:58:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.594 1+0 records in 00:05:00.594 1+0 records out 00:05:00.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227072 s, 18.0 MB/s 00:05:00.594 20:58:16 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.594 20:58:16 -- common/autotest_common.sh@872 -- # size=4096 00:05:00.594 20:58:16 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.594 20:58:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:00.594 20:58:16 -- common/autotest_common.sh@875 -- # return 0 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.594 20:58:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.853 /dev/nbd1 00:05:00.853 20:58:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.853 20:58:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.853 20:58:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:00.853 20:58:16 -- common/autotest_common.sh@855 -- # local i 00:05:00.853 20:58:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:00.853 20:58:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:00.853 20:58:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:00.853 20:58:16 -- common/autotest_common.sh@859 -- # break 00:05:00.853 20:58:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:00.853 20:58:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:00.853 20:58:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.853 1+0 records in 00:05:00.853 1+0 records out 00:05:00.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174141 s, 23.5 MB/s 00:05:00.853 20:58:16 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.853 20:58:16 -- common/autotest_common.sh@872 -- # size=4096 00:05:00.853 20:58:16 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.853 20:58:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:00.853 20:58:16 -- common/autotest_common.sh@875 -- # return 0 00:05:00.853 20:58:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.853 20:58:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.853 20:58:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.853 20:58:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.853 20:58:16 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.112 { 00:05:01.112 "nbd_device": "/dev/nbd0", 00:05:01.112 "bdev_name": "Malloc0" 00:05:01.112 }, 00:05:01.112 { 00:05:01.112 "nbd_device": "/dev/nbd1", 
00:05:01.112 "bdev_name": "Malloc1" 00:05:01.112 } 00:05:01.112 ]' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.112 { 00:05:01.112 "nbd_device": "/dev/nbd0", 00:05:01.112 "bdev_name": "Malloc0" 00:05:01.112 }, 00:05:01.112 { 00:05:01.112 "nbd_device": "/dev/nbd1", 00:05:01.112 "bdev_name": "Malloc1" 00:05:01.112 } 00:05:01.112 ]' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.112 /dev/nbd1' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.112 /dev/nbd1' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.112 256+0 records in 00:05:01.112 256+0 records out 00:05:01.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103217 s, 102 MB/s 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.112 256+0 records in 00:05:01.112 256+0 records out 00:05:01.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134081 s, 78.2 MB/s 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.112 256+0 records in 00:05:01.112 256+0 records out 00:05:01.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146729 s, 71.5 MB/s 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.112 20:58:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@51 -- # local i 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.113 20:58:16 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@41 -- # break 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.371 20:58:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@41 -- # break 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.630 20:58:17 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@65 -- # true 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.888 20:58:17 -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.888 20:58:17 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.888 20:58:17 -- event/event.sh@35 -- # 
sleep 3 00:05:02.147 [2024-04-18 20:58:18.019329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.406 [2024-04-18 20:58:18.085720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.406 [2024-04-18 20:58:18.085722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.406 [2024-04-18 20:58:18.127245] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.406 [2024-04-18 20:58:18.127286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.942 20:58:20 -- event/event.sh@23 -- # for i in {0..2} 00:05:04.942 20:58:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:04.942 spdk_app_start Round 1 00:05:04.942 20:58:20 -- event/event.sh@25 -- # waitforlisten 2866931 /var/tmp/spdk-nbd.sock 00:05:04.942 20:58:20 -- common/autotest_common.sh@817 -- # '[' -z 2866931 ']' 00:05:04.942 20:58:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.942 20:58:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.942 20:58:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.942 20:58:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.942 20:58:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.201 20:58:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:05.201 20:58:20 -- common/autotest_common.sh@850 -- # return 0 00:05:05.201 20:58:20 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.459 Malloc0 00:05:05.459 20:58:21 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.459 Malloc1 00:05:05.459 20:58:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@12 -- # local i 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.459 20:58:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.717 /dev/nbd0 00:05:05.717 20:58:21 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.717 20:58:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.717 20:58:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:05.717 20:58:21 -- common/autotest_common.sh@855 -- # local i 00:05:05.717 20:58:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:05.717 20:58:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:05.717 20:58:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:05.717 20:58:21 -- common/autotest_common.sh@859 -- # break 00:05:05.717 20:58:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:05.717 20:58:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:05.717 20:58:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.717 1+0 records in 00:05:05.717 1+0 records out 00:05:05.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000102579 s, 39.9 MB/s 00:05:05.717 20:58:21 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.717 20:58:21 -- common/autotest_common.sh@872 -- # size=4096 00:05:05.717 20:58:21 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.717 20:58:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:05.717 20:58:21 -- common/autotest_common.sh@875 -- # return 0 00:05:05.717 20:58:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.717 20:58:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.717 20:58:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.975 /dev/nbd1 00:05:05.975 20:58:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.975 20:58:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.975 20:58:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:05.975 20:58:21 -- common/autotest_common.sh@855 -- # local i 00:05:05.975 20:58:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:05.975 20:58:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:05.975 20:58:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:05.975 20:58:21 -- common/autotest_common.sh@859 -- # break 00:05:05.975 20:58:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:05.975 20:58:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:05.975 20:58:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.975 1+0 records in 00:05:05.975 1+0 records out 00:05:05.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194499 s, 21.1 MB/s 00:05:05.975 20:58:21 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.975 20:58:21 -- common/autotest_common.sh@872 -- # size=4096 00:05:05.975 20:58:21 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.975 20:58:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:05.975 20:58:21 -- common/autotest_common.sh@875 -- # return 0 00:05:05.975 20:58:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.975 20:58:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.975 20:58:21 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.975 20:58:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.975 20:58:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.232 { 00:05:06.232 "nbd_device": "/dev/nbd0", 00:05:06.232 "bdev_name": "Malloc0" 00:05:06.232 }, 00:05:06.232 { 00:05:06.232 "nbd_device": "/dev/nbd1", 00:05:06.232 "bdev_name": "Malloc1" 00:05:06.232 } 00:05:06.232 ]' 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.232 { 00:05:06.232 "nbd_device": "/dev/nbd0", 00:05:06.232 "bdev_name": "Malloc0" 00:05:06.232 }, 00:05:06.232 { 00:05:06.232 "nbd_device": "/dev/nbd1", 00:05:06.232 "bdev_name": "Malloc1" 00:05:06.232 } 00:05:06.232 ]' 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.232 /dev/nbd1' 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.232 /dev/nbd1' 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.232 256+0 records in 00:05:06.232 256+0 records out 00:05:06.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420198 s, 250 MB/s 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.232 20:58:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.232 256+0 records in 00:05:06.232 256+0 records out 00:05:06.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141511 s, 74.1 MB/s 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.232 256+0 records in 00:05:06.232 256+0 records out 00:05:06.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143824 s, 72.9 MB/s 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.232 20:58:22 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.233 20:58:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.233 20:58:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.233 20:58:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.233 20:58:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.233 20:58:22 -- bdev/nbd_common.sh@51 -- # local i 00:05:06.233 20:58:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.233 20:58:22 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@41 -- # break 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.491 20:58:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@41 -- # break 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@65 -- # true 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.750 20:58:22 -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.750 20:58:22 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.008 20:58:22 -- event/event.sh@35 -- # sleep 3 00:05:07.266 [2024-04-18 20:58:23.038002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.266 [2024-04-18 20:58:23.103724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.266 [2024-04-18 20:58:23.103726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.266 [2024-04-18 20:58:23.146201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.266 [2024-04-18 20:58:23.146248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.586 20:58:25 -- event/event.sh@23 -- # for i in {0..2} 00:05:10.586 20:58:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:10.586 spdk_app_start Round 2 00:05:10.586 20:58:25 -- event/event.sh@25 -- # waitforlisten 2866931 /var/tmp/spdk-nbd.sock 00:05:10.586 20:58:25 -- common/autotest_common.sh@817 -- # '[' -z 2866931 ']' 00:05:10.586 20:58:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.586 20:58:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.586 20:58:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
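The Round banners above come from a small driver loop in the event.sh script. A rough reconstruction of that loop, inferred from the event.sh@23-35 trace lines (paths shortened to rpc.py; the app_repeat binary itself was launched once earlier with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, so this is a sketch, not the exact source):

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock             # wait for the app's RPC socket
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM    # end this iteration; app_repeat starts the next round itself
    sleep 3
  done

After the last pass the script waits for the Round 3 listener and tears the app down with killprocess (event.sh@38-39 in the trace further below).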
00:05:10.586 20:58:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.586 20:58:25 -- common/autotest_common.sh@10 -- # set +x 00:05:10.586 20:58:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:10.586 20:58:26 -- common/autotest_common.sh@850 -- # return 0 00:05:10.586 20:58:26 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.586 Malloc0 00:05:10.586 20:58:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.586 Malloc1 00:05:10.586 20:58:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@12 -- # local i 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.586 20:58:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.844 /dev/nbd0 00:05:10.844 20:58:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.844 20:58:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.844 20:58:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:10.844 20:58:26 -- common/autotest_common.sh@855 -- # local i 00:05:10.844 20:58:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:10.844 20:58:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:10.844 20:58:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:10.844 20:58:26 -- common/autotest_common.sh@859 -- # break 00:05:10.844 20:58:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:10.844 20:58:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:10.844 20:58:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.844 1+0 records in 00:05:10.844 1+0 records out 00:05:10.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199382 s, 20.5 MB/s 00:05:10.844 20:58:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.844 20:58:26 -- common/autotest_common.sh@872 -- # size=4096 00:05:10.844 20:58:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.844 20:58:26 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:10.844 20:58:26 -- common/autotest_common.sh@875 -- # return 0 00:05:10.844 20:58:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.844 20:58:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.844 20:58:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.844 /dev/nbd1 00:05:10.844 20:58:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.844 20:58:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.844 20:58:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:10.844 20:58:26 -- common/autotest_common.sh@855 -- # local i 00:05:10.844 20:58:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:10.844 20:58:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:10.844 20:58:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:10.844 20:58:26 -- common/autotest_common.sh@859 -- # break 00:05:10.844 20:58:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:10.844 20:58:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:10.844 20:58:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.844 1+0 records in 00:05:10.844 1+0 records out 00:05:10.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180331 s, 22.7 MB/s 00:05:10.844 20:58:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.845 20:58:26 -- common/autotest_common.sh@872 -- # size=4096 00:05:10.845 20:58:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.845 20:58:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:10.845 20:58:26 -- common/autotest_common.sh@875 -- # return 0 00:05:10.845 20:58:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.845 20:58:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.845 20:58:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.845 20:58:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.845 20:58:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.103 { 00:05:11.103 "nbd_device": "/dev/nbd0", 00:05:11.103 "bdev_name": "Malloc0" 00:05:11.103 }, 00:05:11.103 { 00:05:11.103 "nbd_device": "/dev/nbd1", 00:05:11.103 "bdev_name": "Malloc1" 00:05:11.103 } 00:05:11.103 ]' 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.103 { 00:05:11.103 "nbd_device": "/dev/nbd0", 00:05:11.103 "bdev_name": "Malloc0" 00:05:11.103 }, 00:05:11.103 { 00:05:11.103 "nbd_device": "/dev/nbd1", 00:05:11.103 "bdev_name": "Malloc1" 00:05:11.103 } 00:05:11.103 ]' 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.103 /dev/nbd1' 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.103 /dev/nbd1' 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.103 20:58:26 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.103 20:58:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.103 256+0 records in 00:05:11.103 256+0 records out 00:05:11.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104429 s, 100 MB/s 00:05:11.103 20:58:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.103 20:58:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.103 256+0 records in 00:05:11.103 256+0 records out 00:05:11.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140233 s, 74.8 MB/s 00:05:11.103 20:58:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.103 20:58:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.361 256+0 records in 00:05:11.361 256+0 records out 00:05:11.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147186 s, 71.2 MB/s 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@51 -- # local i 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.361 20:58:27 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@41 -- # break 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.361 20:58:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@41 -- # break 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.620 20:58:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@65 -- # true 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.878 20:58:27 -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.878 20:58:27 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.136 20:58:27 -- event/event.sh@35 -- # sleep 3 00:05:12.136 [2024-04-18 20:58:28.065107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.396 [2024-04-18 20:58:28.131345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.396 [2024-04-18 20:58:28.131348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.396 [2024-04-18 20:58:28.173021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.396 [2024-04-18 20:58:28.173060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
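The dd/cmp pairs that repeat in every round above are the write and verify passes of nbd_dd_data_verify in the nbd_common.sh helpers. Stripped of the xtrace noise, the flow is roughly the following sketch; nbdrandtest is the temporary file seen in the trace, $rootdir stands in for the spdk checkout path, and error handling is omitted:

  tmp_file=$rootdir/test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp_file bs=4096 count=256           # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct    # write pass onto each exported Malloc bdev
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $tmp_file $nbd                               # verify pass; any mismatch fails the test
  done
  rm $tmp_file

The dd throughput figures printed above are incidental; the test only cares that cmp finds no mismatch.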
00:05:14.924 20:58:30 -- event/event.sh@38 -- # waitforlisten 2866931 /var/tmp/spdk-nbd.sock 00:05:14.924 20:58:30 -- common/autotest_common.sh@817 -- # '[' -z 2866931 ']' 00:05:14.924 20:58:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.924 20:58:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.924 20:58:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.924 20:58:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.924 20:58:30 -- common/autotest_common.sh@10 -- # set +x 00:05:15.182 20:58:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.182 20:58:31 -- common/autotest_common.sh@850 -- # return 0 00:05:15.182 20:58:31 -- event/event.sh@39 -- # killprocess 2866931 00:05:15.182 20:58:31 -- common/autotest_common.sh@936 -- # '[' -z 2866931 ']' 00:05:15.182 20:58:31 -- common/autotest_common.sh@940 -- # kill -0 2866931 00:05:15.182 20:58:31 -- common/autotest_common.sh@941 -- # uname 00:05:15.182 20:58:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.182 20:58:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2866931 00:05:15.182 20:58:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.182 20:58:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.182 20:58:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2866931' 00:05:15.182 killing process with pid 2866931 00:05:15.182 20:58:31 -- common/autotest_common.sh@955 -- # kill 2866931 00:05:15.182 20:58:31 -- common/autotest_common.sh@960 -- # wait 2866931 00:05:15.440 spdk_app_start is called in Round 0. 00:05:15.440 Shutdown signal received, stop current app iteration 00:05:15.440 Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 reinitialization... 00:05:15.440 spdk_app_start is called in Round 1. 00:05:15.440 Shutdown signal received, stop current app iteration 00:05:15.440 Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 reinitialization... 00:05:15.440 spdk_app_start is called in Round 2. 00:05:15.440 Shutdown signal received, stop current app iteration 00:05:15.440 Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 reinitialization... 00:05:15.440 spdk_app_start is called in Round 3. 
00:05:15.441 Shutdown signal received, stop current app iteration 00:05:15.441 20:58:31 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:15.441 20:58:31 -- event/event.sh@42 -- # return 0 00:05:15.441 00:05:15.441 real 0m16.198s 00:05:15.441 user 0m35.009s 00:05:15.441 sys 0m2.256s 00:05:15.441 20:58:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.441 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.441 ************************************ 00:05:15.441 END TEST app_repeat 00:05:15.441 ************************************ 00:05:15.441 20:58:31 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:15.441 20:58:31 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.441 20:58:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.441 20:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.441 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.699 ************************************ 00:05:15.699 START TEST cpu_locks 00:05:15.699 ************************************ 00:05:15.699 20:58:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.699 * Looking for test storage... 00:05:15.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.699 20:58:31 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.699 20:58:31 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.699 20:58:31 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.699 20:58:31 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.699 20:58:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.699 20:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.699 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.958 ************************************ 00:05:15.958 START TEST default_locks 00:05:15.958 ************************************ 00:05:15.958 20:58:31 -- common/autotest_common.sh@1111 -- # default_locks 00:05:15.958 20:58:31 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2869936 00:05:15.958 20:58:31 -- event/cpu_locks.sh@47 -- # waitforlisten 2869936 00:05:15.958 20:58:31 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.958 20:58:31 -- common/autotest_common.sh@817 -- # '[' -z 2869936 ']' 00:05:15.958 20:58:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.958 20:58:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.958 20:58:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.958 20:58:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.958 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.958 [2024-04-18 20:58:31.683701] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
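killprocess, used to tear down app_repeat above and each spdk_tgt in the cpu_locks tests below, shows up in the trace as autotest_common.sh@936-960. A simplified reconstruction (the real helper has extra handling, e.g. when the command name turns out to be sudo):

  killprocess() {
    local pid=$1
    kill -0 "$pid"                                        # assert the process is still alive
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")     # reported as reactor_0 in this run
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                           # reap it so the exit status is collected
  }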
00:05:15.958 [2024-04-18 20:58:31.683746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869936 ] 00:05:15.958 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.958 [2024-04-18 20:58:31.742141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.958 [2024-04-18 20:58:31.811113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.892 20:58:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.892 20:58:32 -- common/autotest_common.sh@850 -- # return 0 00:05:16.892 20:58:32 -- event/cpu_locks.sh@49 -- # locks_exist 2869936 00:05:16.892 20:58:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.892 20:58:32 -- event/cpu_locks.sh@22 -- # lslocks -p 2869936 00:05:17.150 lslocks: write error 00:05:17.150 20:58:32 -- event/cpu_locks.sh@50 -- # killprocess 2869936 00:05:17.150 20:58:32 -- common/autotest_common.sh@936 -- # '[' -z 2869936 ']' 00:05:17.150 20:58:32 -- common/autotest_common.sh@940 -- # kill -0 2869936 00:05:17.150 20:58:32 -- common/autotest_common.sh@941 -- # uname 00:05:17.150 20:58:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.150 20:58:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2869936 00:05:17.150 20:58:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.150 20:58:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.150 20:58:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2869936' 00:05:17.150 killing process with pid 2869936 00:05:17.150 20:58:32 -- common/autotest_common.sh@955 -- # kill 2869936 00:05:17.150 20:58:32 -- common/autotest_common.sh@960 -- # wait 2869936 00:05:17.408 20:58:33 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2869936 00:05:17.408 20:58:33 -- common/autotest_common.sh@638 -- # local es=0 00:05:17.408 20:58:33 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2869936 00:05:17.408 20:58:33 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:17.408 20:58:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.408 20:58:33 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:17.408 20:58:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.408 20:58:33 -- common/autotest_common.sh@641 -- # waitforlisten 2869936 00:05:17.408 20:58:33 -- common/autotest_common.sh@817 -- # '[' -z 2869936 ']' 00:05:17.408 20:58:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.408 20:58:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.408 20:58:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
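locks_exist, traced above as cpu_locks.sh@22, is the check the whole file is built around: an spdk_tgt started with -m 0x1 is expected to hold a file lock whose path contains spdk_cpu_lock, and the helper is essentially a one-liner:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock    # any matching lock entry means the core lock is held
  }

The stray 'lslocks: write error' printed next to it is lslocks reacting to grep -q closing the pipe as soon as it finds a match, not a test failure. The NOT waitforlisten sequence around this point is the negative half of default_locks: after killprocess, waiting on the dead pid must fail, so the 'No such process' and es=1 lines just below are expected output.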
00:05:17.408 20:58:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.408 20:58:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2869936) - No such process 00:05:17.408 ERROR: process (pid: 2869936) is no longer running 00:05:17.408 20:58:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.408 20:58:33 -- common/autotest_common.sh@850 -- # return 1 00:05:17.408 20:58:33 -- common/autotest_common.sh@641 -- # es=1 00:05:17.408 20:58:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:17.408 20:58:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:17.408 20:58:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:17.408 20:58:33 -- event/cpu_locks.sh@54 -- # no_locks 00:05:17.408 20:58:33 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.408 20:58:33 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.408 20:58:33 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.408 00:05:17.408 real 0m1.610s 00:05:17.408 user 0m1.669s 00:05:17.408 sys 0m0.533s 00:05:17.408 20:58:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.408 20:58:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.408 ************************************ 00:05:17.408 END TEST default_locks 00:05:17.408 ************************************ 00:05:17.408 20:58:33 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:17.408 20:58:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.408 20:58:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.408 20:58:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.666 ************************************ 00:05:17.666 START TEST default_locks_via_rpc 00:05:17.666 ************************************ 00:05:17.666 20:58:33 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:17.666 20:58:33 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2870212 00:05:17.667 20:58:33 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.667 20:58:33 -- event/cpu_locks.sh@63 -- # waitforlisten 2870212 00:05:17.667 20:58:33 -- common/autotest_common.sh@817 -- # '[' -z 2870212 ']' 00:05:17.667 20:58:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.667 20:58:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.667 20:58:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.667 20:58:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.667 20:58:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.667 [2024-04-18 20:58:33.445910] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
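default_locks_via_rpc, which starts here, exercises the same core-lock check but toggles the locks at runtime over RPC instead of at process start. In outline, reconstructed from the cpu_locks.sh@65-71 trace that follows (rpc_cmd is the autotest wrapper around scripts/rpc.py, and spdk_tgt_pid names the pid captured at startup, 2870212 in this run):

  rpc_cmd framework_disable_cpumask_locks    # drop the core lock while the target keeps running
  no_locks                                   # helper: expects zero spdk_cpu_lock entries to remain
  rpc_cmd framework_enable_cpumask_locks     # take the lock back
  locks_exist "$spdk_tgt_pid"                # same lslocks | grep check as before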
00:05:17.667 [2024-04-18 20:58:33.445947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870212 ] 00:05:17.667 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.667 [2024-04-18 20:58:33.504842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.667 [2024-04-18 20:58:33.582846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.599 20:58:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.599 20:58:34 -- common/autotest_common.sh@850 -- # return 0 00:05:18.599 20:58:34 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:18.599 20:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.599 20:58:34 -- common/autotest_common.sh@10 -- # set +x 00:05:18.599 20:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.599 20:58:34 -- event/cpu_locks.sh@67 -- # no_locks 00:05:18.599 20:58:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.599 20:58:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.599 20:58:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.599 20:58:34 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.599 20:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.599 20:58:34 -- common/autotest_common.sh@10 -- # set +x 00:05:18.599 20:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.599 20:58:34 -- event/cpu_locks.sh@71 -- # locks_exist 2870212 00:05:18.599 20:58:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.599 20:58:34 -- event/cpu_locks.sh@22 -- # lslocks -p 2870212 00:05:18.858 20:58:34 -- event/cpu_locks.sh@73 -- # killprocess 2870212 00:05:18.858 20:58:34 -- common/autotest_common.sh@936 -- # '[' -z 2870212 ']' 00:05:18.858 20:58:34 -- common/autotest_common.sh@940 -- # kill -0 2870212 00:05:18.858 20:58:34 -- common/autotest_common.sh@941 -- # uname 00:05:18.858 20:58:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.858 20:58:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2870212 00:05:18.858 20:58:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.858 20:58:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.858 20:58:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2870212' 00:05:18.858 killing process with pid 2870212 00:05:18.858 20:58:34 -- common/autotest_common.sh@955 -- # kill 2870212 00:05:18.858 20:58:34 -- common/autotest_common.sh@960 -- # wait 2870212 00:05:19.425 00:05:19.425 real 0m1.654s 00:05:19.425 user 0m1.733s 00:05:19.425 sys 0m0.543s 00:05:19.425 20:58:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.425 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 ************************************ 00:05:19.425 END TEST default_locks_via_rpc 00:05:19.425 ************************************ 00:05:19.425 20:58:35 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:19.425 20:58:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.425 20:58:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.425 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 ************************************ 00:05:19.425 START TEST non_locking_app_on_locked_coremask 
00:05:19.425 ************************************ 00:05:19.425 20:58:35 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:19.425 20:58:35 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2870484 00:05:19.425 20:58:35 -- event/cpu_locks.sh@81 -- # waitforlisten 2870484 /var/tmp/spdk.sock 00:05:19.425 20:58:35 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.425 20:58:35 -- common/autotest_common.sh@817 -- # '[' -z 2870484 ']' 00:05:19.425 20:58:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.425 20:58:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.425 20:58:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.425 20:58:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.425 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 [2024-04-18 20:58:35.259907] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:19.425 [2024-04-18 20:58:35.259944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870484 ] 00:05:19.425 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.425 [2024-04-18 20:58:35.317666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.682 [2024-04-18 20:58:35.396730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.247 20:58:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.247 20:58:36 -- common/autotest_common.sh@850 -- # return 0 00:05:20.247 20:58:36 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:20.247 20:58:36 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2870708 00:05:20.247 20:58:36 -- event/cpu_locks.sh@85 -- # waitforlisten 2870708 /var/tmp/spdk2.sock 00:05:20.247 20:58:36 -- common/autotest_common.sh@817 -- # '[' -z 2870708 ']' 00:05:20.247 20:58:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.247 20:58:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:20.248 20:58:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.248 20:58:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:20.248 20:58:36 -- common/autotest_common.sh@10 -- # set +x 00:05:20.248 [2024-04-18 20:58:36.085615] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:20.248 [2024-04-18 20:58:36.085662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870708 ] 00:05:20.248 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.248 [2024-04-18 20:58:36.164440] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
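non_locking_app_on_locked_coremask pairs two targets on the same core: the first spdk_tgt -m 0x1 takes the core-0 lock, and the second is started with --disable-cpumask-locks plus its own RPC socket so it can come up on the same core without contending for that lock (hence the 'CPU core locks deactivated' notice above). In outline, with first_pid/second_pid as illustrative names for the pids captured in the trace (2870484 and 2870708), and spdk_tgt abbreviating the full build/bin/spdk_tgt path:

  spdk_tgt -m 0x1 &                                                  # first instance, holds the core-0 lock
  first_pid=$!
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the lock entirely
  second_pid=$!
  locks_exist "$first_pid"                                           # the lock is still owned by the first target
  killprocess "$first_pid"
  killprocess "$second_pid"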
00:05:20.248 [2024-04-18 20:58:36.164462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.505 [2024-04-18 20:58:36.309021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.072 20:58:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:21.072 20:58:36 -- common/autotest_common.sh@850 -- # return 0 00:05:21.072 20:58:36 -- event/cpu_locks.sh@87 -- # locks_exist 2870484 00:05:21.072 20:58:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.072 20:58:36 -- event/cpu_locks.sh@22 -- # lslocks -p 2870484 00:05:21.636 lslocks: write error 00:05:21.636 20:58:37 -- event/cpu_locks.sh@89 -- # killprocess 2870484 00:05:21.636 20:58:37 -- common/autotest_common.sh@936 -- # '[' -z 2870484 ']' 00:05:21.636 20:58:37 -- common/autotest_common.sh@940 -- # kill -0 2870484 00:05:21.636 20:58:37 -- common/autotest_common.sh@941 -- # uname 00:05:21.636 20:58:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.636 20:58:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2870484 00:05:21.636 20:58:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.636 20:58:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.636 20:58:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2870484' 00:05:21.636 killing process with pid 2870484 00:05:21.636 20:58:37 -- common/autotest_common.sh@955 -- # kill 2870484 00:05:21.636 20:58:37 -- common/autotest_common.sh@960 -- # wait 2870484 00:05:22.201 20:58:38 -- event/cpu_locks.sh@90 -- # killprocess 2870708 00:05:22.201 20:58:38 -- common/autotest_common.sh@936 -- # '[' -z 2870708 ']' 00:05:22.201 20:58:38 -- common/autotest_common.sh@940 -- # kill -0 2870708 00:05:22.201 20:58:38 -- common/autotest_common.sh@941 -- # uname 00:05:22.201 20:58:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.201 20:58:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2870708 00:05:22.201 20:58:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.201 20:58:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.201 20:58:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2870708' 00:05:22.201 killing process with pid 2870708 00:05:22.201 20:58:38 -- common/autotest_common.sh@955 -- # kill 2870708 00:05:22.201 20:58:38 -- common/autotest_common.sh@960 -- # wait 2870708 00:05:22.768 00:05:22.768 real 0m3.255s 00:05:22.768 user 0m3.488s 00:05:22.768 sys 0m0.883s 00:05:22.768 20:58:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.768 20:58:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.768 ************************************ 00:05:22.768 END TEST non_locking_app_on_locked_coremask 00:05:22.768 ************************************ 00:05:22.768 20:58:38 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:22.768 20:58:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.768 20:58:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.768 20:58:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.768 ************************************ 00:05:22.768 START TEST locking_app_on_unlocked_coremask 00:05:22.768 ************************************ 00:05:22.768 20:58:38 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:22.768 20:58:38 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2871208 00:05:22.768 20:58:38 -- 
event/cpu_locks.sh@99 -- # waitforlisten 2871208 /var/tmp/spdk.sock 00:05:22.768 20:58:38 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:22.768 20:58:38 -- common/autotest_common.sh@817 -- # '[' -z 2871208 ']' 00:05:22.768 20:58:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.768 20:58:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:22.768 20:58:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.768 20:58:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:22.768 20:58:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.768 [2024-04-18 20:58:38.681220] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:22.768 [2024-04-18 20:58:38.681264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871208 ] 00:05:23.026 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.026 [2024-04-18 20:58:38.741741] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:23.026 [2024-04-18 20:58:38.741768] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.026 [2024-04-18 20:58:38.814191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.594 20:58:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:23.594 20:58:39 -- common/autotest_common.sh@850 -- # return 0 00:05:23.594 20:58:39 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2871242 00:05:23.594 20:58:39 -- event/cpu_locks.sh@103 -- # waitforlisten 2871242 /var/tmp/spdk2.sock 00:05:23.594 20:58:39 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.594 20:58:39 -- common/autotest_common.sh@817 -- # '[' -z 2871242 ']' 00:05:23.594 20:58:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.594 20:58:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.594 20:58:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.594 20:58:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.594 20:58:39 -- common/autotest_common.sh@10 -- # set +x 00:05:23.852 [2024-04-18 20:58:39.531010] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:23.852 [2024-04-18 20:58:39.531057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871242 ] 00:05:23.852 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.852 [2024-04-18 20:58:39.613819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.852 [2024-04-18 20:58:39.764489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.415 20:58:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.415 20:58:40 -- common/autotest_common.sh@850 -- # return 0 00:05:24.415 20:58:40 -- event/cpu_locks.sh@105 -- # locks_exist 2871242 00:05:24.415 20:58:40 -- event/cpu_locks.sh@22 -- # lslocks -p 2871242 00:05:24.415 20:58:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.978 lslocks: write error 00:05:24.978 20:58:40 -- event/cpu_locks.sh@107 -- # killprocess 2871208 00:05:24.978 20:58:40 -- common/autotest_common.sh@936 -- # '[' -z 2871208 ']' 00:05:24.978 20:58:40 -- common/autotest_common.sh@940 -- # kill -0 2871208 00:05:24.978 20:58:40 -- common/autotest_common.sh@941 -- # uname 00:05:24.978 20:58:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.978 20:58:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2871208 00:05:24.978 20:58:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.978 20:58:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.978 20:58:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2871208' 00:05:24.978 killing process with pid 2871208 00:05:24.978 20:58:40 -- common/autotest_common.sh@955 -- # kill 2871208 00:05:24.978 20:58:40 -- common/autotest_common.sh@960 -- # wait 2871208 00:05:25.627 20:58:41 -- event/cpu_locks.sh@108 -- # killprocess 2871242 00:05:25.627 20:58:41 -- common/autotest_common.sh@936 -- # '[' -z 2871242 ']' 00:05:25.627 20:58:41 -- common/autotest_common.sh@940 -- # kill -0 2871242 00:05:25.627 20:58:41 -- common/autotest_common.sh@941 -- # uname 00:05:25.627 20:58:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.627 20:58:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2871242 00:05:25.884 20:58:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.884 20:58:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.884 20:58:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2871242' 00:05:25.884 killing process with pid 2871242 00:05:25.884 20:58:41 -- common/autotest_common.sh@955 -- # kill 2871242 00:05:25.884 20:58:41 -- common/autotest_common.sh@960 -- # wait 2871242 00:05:26.142 00:05:26.142 real 0m3.281s 00:05:26.142 user 0m3.486s 00:05:26.142 sys 0m0.973s 00:05:26.142 20:58:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.142 20:58:41 -- common/autotest_common.sh@10 -- # set +x 00:05:26.142 ************************************ 00:05:26.142 END TEST locking_app_on_unlocked_coremask 00:05:26.142 ************************************ 00:05:26.142 20:58:41 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:26.142 20:58:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.142 20:58:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.142 20:58:41 -- common/autotest_common.sh@10 -- # set +x 00:05:26.142 
************************************ 00:05:26.142 START TEST locking_app_on_locked_coremask 00:05:26.142 ************************************ 00:05:26.142 20:58:42 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:26.401 20:58:42 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2871722 00:05:26.401 20:58:42 -- event/cpu_locks.sh@116 -- # waitforlisten 2871722 /var/tmp/spdk.sock 00:05:26.401 20:58:42 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.401 20:58:42 -- common/autotest_common.sh@817 -- # '[' -z 2871722 ']' 00:05:26.401 20:58:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.401 20:58:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:26.401 20:58:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.401 20:58:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:26.401 20:58:42 -- common/autotest_common.sh@10 -- # set +x 00:05:26.401 [2024-04-18 20:58:42.120123] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:26.401 [2024-04-18 20:58:42.120162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871722 ] 00:05:26.401 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.401 [2024-04-18 20:58:42.178851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.401 [2024-04-18 20:58:42.256292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.334 20:58:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.334 20:58:42 -- common/autotest_common.sh@850 -- # return 0 00:05:27.334 20:58:42 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2871950 00:05:27.334 20:58:42 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2871950 /var/tmp/spdk2.sock 00:05:27.334 20:58:42 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:27.334 20:58:42 -- common/autotest_common.sh@638 -- # local es=0 00:05:27.334 20:58:42 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2871950 /var/tmp/spdk2.sock 00:05:27.334 20:58:42 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:27.334 20:58:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.334 20:58:42 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:27.334 20:58:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.334 20:58:42 -- common/autotest_common.sh@641 -- # waitforlisten 2871950 /var/tmp/spdk2.sock 00:05:27.334 20:58:42 -- common/autotest_common.sh@817 -- # '[' -z 2871950 ']' 00:05:27.334 20:58:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.334 20:58:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.334 20:58:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:27.334 20:58:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.334 20:58:42 -- common/autotest_common.sh@10 -- # set +x 00:05:27.334 [2024-04-18 20:58:42.966235] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:27.334 [2024-04-18 20:58:42.966281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871950 ] 00:05:27.334 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.334 [2024-04-18 20:58:43.046585] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2871722 has claimed it. 00:05:27.334 [2024-04-18 20:58:43.046613] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2871950) - No such process 00:05:27.900 ERROR: process (pid: 2871950) is no longer running 00:05:27.900 20:58:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.900 20:58:43 -- common/autotest_common.sh@850 -- # return 1 00:05:27.900 20:58:43 -- common/autotest_common.sh@641 -- # es=1 00:05:27.900 20:58:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:27.900 20:58:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:27.900 20:58:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:27.900 20:58:43 -- event/cpu_locks.sh@122 -- # locks_exist 2871722 00:05:27.900 20:58:43 -- event/cpu_locks.sh@22 -- # lslocks -p 2871722 00:05:27.900 20:58:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.159 lslocks: write error 00:05:28.159 20:58:43 -- event/cpu_locks.sh@124 -- # killprocess 2871722 00:05:28.159 20:58:43 -- common/autotest_common.sh@936 -- # '[' -z 2871722 ']' 00:05:28.159 20:58:43 -- common/autotest_common.sh@940 -- # kill -0 2871722 00:05:28.159 20:58:43 -- common/autotest_common.sh@941 -- # uname 00:05:28.159 20:58:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.159 20:58:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2871722 00:05:28.159 20:58:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.159 20:58:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.159 20:58:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2871722' 00:05:28.159 killing process with pid 2871722 00:05:28.159 20:58:43 -- common/autotest_common.sh@955 -- # kill 2871722 00:05:28.159 20:58:43 -- common/autotest_common.sh@960 -- # wait 2871722 00:05:28.417 00:05:28.417 real 0m2.179s 00:05:28.417 user 0m2.401s 00:05:28.417 sys 0m0.572s 00:05:28.417 20:58:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.417 20:58:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.417 ************************************ 00:05:28.417 END TEST locking_app_on_locked_coremask 00:05:28.417 ************************************ 00:05:28.417 20:58:44 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:28.417 20:58:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.417 20:58:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.418 20:58:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.676 ************************************ 00:05:28.676 START TEST locking_overlapped_coremask 00:05:28.676 
************************************ 00:05:28.676 20:58:44 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:28.676 20:58:44 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2872223 00:05:28.676 20:58:44 -- event/cpu_locks.sh@133 -- # waitforlisten 2872223 /var/tmp/spdk.sock 00:05:28.676 20:58:44 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:28.676 20:58:44 -- common/autotest_common.sh@817 -- # '[' -z 2872223 ']' 00:05:28.676 20:58:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.676 20:58:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.676 20:58:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.676 20:58:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.676 20:58:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.676 [2024-04-18 20:58:44.456671] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:28.676 [2024-04-18 20:58:44.456708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872223 ] 00:05:28.676 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.676 [2024-04-18 20:58:44.516481] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.676 [2024-04-18 20:58:44.594924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.676 [2024-04-18 20:58:44.595023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.676 [2024-04-18 20:58:44.595025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.612 20:58:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.612 20:58:45 -- common/autotest_common.sh@850 -- # return 0 00:05:29.612 20:58:45 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2872399 00:05:29.612 20:58:45 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2872399 /var/tmp/spdk2.sock 00:05:29.612 20:58:45 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:29.612 20:58:45 -- common/autotest_common.sh@638 -- # local es=0 00:05:29.612 20:58:45 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2872399 /var/tmp/spdk2.sock 00:05:29.612 20:58:45 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:29.612 20:58:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:29.612 20:58:45 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:29.612 20:58:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:29.612 20:58:45 -- common/autotest_common.sh@641 -- # waitforlisten 2872399 /var/tmp/spdk2.sock 00:05:29.612 20:58:45 -- common/autotest_common.sh@817 -- # '[' -z 2872399 ']' 00:05:29.612 20:58:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.612 20:58:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.612 20:58:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:29.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.612 20:58:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.612 20:58:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.612 [2024-04-18 20:58:45.308994] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:29.612 [2024-04-18 20:58:45.309045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872399 ] 00:05:29.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.612 [2024-04-18 20:58:45.394910] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2872223 has claimed it. 00:05:29.612 [2024-04-18 20:58:45.394950] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2872399) - No such process 00:05:30.179 ERROR: process (pid: 2872399) is no longer running 00:05:30.179 20:58:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.179 20:58:45 -- common/autotest_common.sh@850 -- # return 1 00:05:30.179 20:58:45 -- common/autotest_common.sh@641 -- # es=1 00:05:30.179 20:58:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:30.179 20:58:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:30.179 20:58:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:30.179 20:58:45 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:30.179 20:58:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.179 20:58:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.179 20:58:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.179 20:58:45 -- event/cpu_locks.sh@141 -- # killprocess 2872223 00:05:30.179 20:58:45 -- common/autotest_common.sh@936 -- # '[' -z 2872223 ']' 00:05:30.179 20:58:45 -- common/autotest_common.sh@940 -- # kill -0 2872223 00:05:30.179 20:58:45 -- common/autotest_common.sh@941 -- # uname 00:05:30.179 20:58:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.179 20:58:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2872223 00:05:30.179 20:58:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.179 20:58:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.179 20:58:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2872223' 00:05:30.179 killing process with pid 2872223 00:05:30.179 20:58:45 -- common/autotest_common.sh@955 -- # kill 2872223 00:05:30.179 20:58:45 -- common/autotest_common.sh@960 -- # wait 2872223 00:05:30.437 00:05:30.437 real 0m1.918s 00:05:30.437 user 0m5.392s 00:05:30.437 sys 0m0.395s 00:05:30.437 20:58:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:30.437 20:58:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.437 ************************************ 00:05:30.437 END TEST locking_overlapped_coremask 00:05:30.437 ************************************ 00:05:30.437 20:58:46 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:30.437 20:58:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.437 20:58:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.437 20:58:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.697 ************************************ 00:05:30.697 START TEST locking_overlapped_coremask_via_rpc 00:05:30.697 ************************************ 00:05:30.697 20:58:46 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:30.697 20:58:46 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2872575 00:05:30.697 20:58:46 -- event/cpu_locks.sh@149 -- # waitforlisten 2872575 /var/tmp/spdk.sock 00:05:30.697 20:58:46 -- common/autotest_common.sh@817 -- # '[' -z 2872575 ']' 00:05:30.697 20:58:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.697 20:58:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.697 20:58:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.697 20:58:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.697 20:58:46 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:30.697 20:58:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.697 [2024-04-18 20:58:46.518108] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:30.697 [2024-04-18 20:58:46.518147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872575 ] 00:05:30.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.697 [2024-04-18 20:58:46.577706] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.697 [2024-04-18 20:58:46.577730] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.955 [2024-04-18 20:58:46.657279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.955 [2024-04-18 20:58:46.657299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.955 [2024-04-18 20:58:46.657301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.525 20:58:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.525 20:58:47 -- common/autotest_common.sh@850 -- # return 0 00:05:31.525 20:58:47 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2872734 00:05:31.525 20:58:47 -- event/cpu_locks.sh@153 -- # waitforlisten 2872734 /var/tmp/spdk2.sock 00:05:31.525 20:58:47 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:31.525 20:58:47 -- common/autotest_common.sh@817 -- # '[' -z 2872734 ']' 00:05:31.525 20:58:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.525 20:58:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.525 20:58:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:31.525 20:58:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.525 20:58:47 -- common/autotest_common.sh@10 -- # set +x 00:05:31.525 [2024-04-18 20:58:47.367751] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:31.525 [2024-04-18 20:58:47.367799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872734 ] 00:05:31.525 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.525 [2024-04-18 20:58:47.452743] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:31.525 [2024-04-18 20:58:47.452772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.783 [2024-04-18 20:58:47.605181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.783 [2024-04-18 20:58:47.605295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.783 [2024-04-18 20:58:47.605296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:32.350 20:58:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.350 20:58:48 -- common/autotest_common.sh@850 -- # return 0 00:05:32.350 20:58:48 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.350 20:58:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.350 20:58:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.350 20:58:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:32.350 20:58:48 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.350 20:58:48 -- common/autotest_common.sh@638 -- # local es=0 00:05:32.350 20:58:48 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.350 20:58:48 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:32.350 20:58:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.350 20:58:48 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:32.350 20:58:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.350 20:58:48 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.350 20:58:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.350 20:58:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.350 [2024-04-18 20:58:48.192584] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2872575 has claimed it. 
00:05:32.350 request: 00:05:32.350 { 00:05:32.350 "method": "framework_enable_cpumask_locks", 00:05:32.350 "req_id": 1 00:05:32.350 } 00:05:32.350 Got JSON-RPC error response 00:05:32.350 response: 00:05:32.350 { 00:05:32.350 "code": -32603, 00:05:32.350 "message": "Failed to claim CPU core: 2" 00:05:32.350 } 00:05:32.350 20:58:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:32.350 20:58:48 -- common/autotest_common.sh@641 -- # es=1 00:05:32.350 20:58:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:32.350 20:58:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:32.350 20:58:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:32.350 20:58:48 -- event/cpu_locks.sh@158 -- # waitforlisten 2872575 /var/tmp/spdk.sock 00:05:32.350 20:58:48 -- common/autotest_common.sh@817 -- # '[' -z 2872575 ']' 00:05:32.350 20:58:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.350 20:58:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.350 20:58:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.350 20:58:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.350 20:58:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.608 20:58:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.608 20:58:48 -- common/autotest_common.sh@850 -- # return 0 00:05:32.608 20:58:48 -- event/cpu_locks.sh@159 -- # waitforlisten 2872734 /var/tmp/spdk2.sock 00:05:32.608 20:58:48 -- common/autotest_common.sh@817 -- # '[' -z 2872734 ']' 00:05:32.608 20:58:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.608 20:58:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.608 20:58:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:32.608 20:58:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.608 20:58:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.866 20:58:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.866 20:58:48 -- common/autotest_common.sh@850 -- # return 0 00:05:32.866 20:58:48 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:32.866 20:58:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.866 20:58:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.866 20:58:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.866 00:05:32.866 real 0m2.106s 00:05:32.866 user 0m0.872s 00:05:32.866 sys 0m0.167s 00:05:32.866 20:58:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.866 20:58:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.866 ************************************ 00:05:32.866 END TEST locking_overlapped_coremask_via_rpc 00:05:32.866 ************************************ 00:05:32.866 20:58:48 -- event/cpu_locks.sh@174 -- # cleanup 00:05:32.866 20:58:48 -- event/cpu_locks.sh@15 -- # [[ -z 2872575 ]] 00:05:32.866 20:58:48 -- event/cpu_locks.sh@15 -- # killprocess 2872575 00:05:32.866 20:58:48 -- common/autotest_common.sh@936 -- # '[' -z 2872575 ']' 00:05:32.866 20:58:48 -- common/autotest_common.sh@940 -- # kill -0 2872575 00:05:32.866 20:58:48 -- common/autotest_common.sh@941 -- # uname 00:05:32.866 20:58:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.866 20:58:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2872575 00:05:32.866 20:58:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.866 20:58:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.866 20:58:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2872575' 00:05:32.866 killing process with pid 2872575 00:05:32.867 20:58:48 -- common/autotest_common.sh@955 -- # kill 2872575 00:05:32.867 20:58:48 -- common/autotest_common.sh@960 -- # wait 2872575 00:05:33.125 20:58:48 -- event/cpu_locks.sh@16 -- # [[ -z 2872734 ]] 00:05:33.125 20:58:48 -- event/cpu_locks.sh@16 -- # killprocess 2872734 00:05:33.125 20:58:48 -- common/autotest_common.sh@936 -- # '[' -z 2872734 ']' 00:05:33.125 20:58:48 -- common/autotest_common.sh@940 -- # kill -0 2872734 00:05:33.125 20:58:48 -- common/autotest_common.sh@941 -- # uname 00:05:33.125 20:58:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.125 20:58:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2872734 00:05:33.125 20:58:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:33.125 20:58:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:33.125 20:58:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2872734' 00:05:33.125 killing process with pid 2872734 00:05:33.125 20:58:49 -- common/autotest_common.sh@955 -- # kill 2872734 00:05:33.125 20:58:49 -- common/autotest_common.sh@960 -- # wait 2872734 00:05:33.693 20:58:49 -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.693 20:58:49 -- event/cpu_locks.sh@1 -- # cleanup 00:05:33.693 20:58:49 -- event/cpu_locks.sh@15 -- # [[ -z 2872575 ]] 00:05:33.693 20:58:49 -- event/cpu_locks.sh@15 -- # killprocess 2872575 
00:05:33.693 20:58:49 -- common/autotest_common.sh@936 -- # '[' -z 2872575 ']' 00:05:33.693 20:58:49 -- common/autotest_common.sh@940 -- # kill -0 2872575 00:05:33.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2872575) - No such process 00:05:33.693 20:58:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2872575 is not found' 00:05:33.693 Process with pid 2872575 is not found 00:05:33.693 20:58:49 -- event/cpu_locks.sh@16 -- # [[ -z 2872734 ]] 00:05:33.693 20:58:49 -- event/cpu_locks.sh@16 -- # killprocess 2872734 00:05:33.693 20:58:49 -- common/autotest_common.sh@936 -- # '[' -z 2872734 ']' 00:05:33.693 20:58:49 -- common/autotest_common.sh@940 -- # kill -0 2872734 00:05:33.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2872734) - No such process 00:05:33.693 20:58:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2872734 is not found' 00:05:33.693 Process with pid 2872734 is not found 00:05:33.693 20:58:49 -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.693 00:05:33.693 real 0m17.971s 00:05:33.693 user 0m29.927s 00:05:33.693 sys 0m5.236s 00:05:33.693 20:58:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.693 20:58:49 -- common/autotest_common.sh@10 -- # set +x 00:05:33.693 ************************************ 00:05:33.693 END TEST cpu_locks 00:05:33.693 ************************************ 00:05:33.693 00:05:33.693 real 0m44.102s 00:05:33.693 user 1m22.527s 00:05:33.693 sys 0m8.711s 00:05:33.693 20:58:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.693 20:58:49 -- common/autotest_common.sh@10 -- # set +x 00:05:33.693 ************************************ 00:05:33.693 END TEST event 00:05:33.693 ************************************ 00:05:33.693 20:58:49 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:33.693 20:58:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.693 20:58:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.693 20:58:49 -- common/autotest_common.sh@10 -- # set +x 00:05:33.693 ************************************ 00:05:33.693 START TEST thread 00:05:33.693 ************************************ 00:05:33.693 20:58:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:33.953 * Looking for test storage... 00:05:33.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:33.953 20:58:49 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.953 20:58:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:33.953 20:58:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.953 20:58:49 -- common/autotest_common.sh@10 -- # set +x 00:05:33.953 ************************************ 00:05:33.953 START TEST thread_poller_perf 00:05:33.953 ************************************ 00:05:33.953 20:58:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.953 [2024-04-18 20:58:49.793253] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:33.953 [2024-04-18 20:58:49.793319] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873302 ] 00:05:33.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.953 [2024-04-18 20:58:49.856002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.212 [2024-04-18 20:58:49.928196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.212 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:35.147 ====================================== 00:05:35.147 busy:2307174402 (cyc) 00:05:35.147 total_run_count: 393000 00:05:35.147 tsc_hz: 2300000000 (cyc) 00:05:35.147 ====================================== 00:05:35.147 poller_cost: 5870 (cyc), 2552 (nsec) 00:05:35.147 00:05:35.147 real 0m1.257s 00:05:35.147 user 0m1.179s 00:05:35.147 sys 0m0.073s 00:05:35.147 20:58:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.147 20:58:51 -- common/autotest_common.sh@10 -- # set +x 00:05:35.147 ************************************ 00:05:35.147 END TEST thread_poller_perf 00:05:35.147 ************************************ 00:05:35.147 20:58:51 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.147 20:58:51 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:35.147 20:58:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.147 20:58:51 -- common/autotest_common.sh@10 -- # set +x 00:05:35.406 ************************************ 00:05:35.406 START TEST thread_poller_perf 00:05:35.406 ************************************ 00:05:35.406 20:58:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.406 [2024-04-18 20:58:51.194879] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:35.406 [2024-04-18 20:58:51.194935] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873557 ] 00:05:35.406 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.406 [2024-04-18 20:58:51.256394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.406 [2024-04-18 20:58:51.326419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.406 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:36.779 ====================================== 00:05:36.779 busy:2301825042 (cyc) 00:05:36.779 total_run_count: 5504000 00:05:36.779 tsc_hz: 2300000000 (cyc) 00:05:36.779 ====================================== 00:05:36.779 poller_cost: 418 (cyc), 181 (nsec) 00:05:36.779 00:05:36.779 real 0m1.238s 00:05:36.779 user 0m1.161s 00:05:36.779 sys 0m0.073s 00:05:36.779 20:58:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.779 20:58:52 -- common/autotest_common.sh@10 -- # set +x 00:05:36.779 ************************************ 00:05:36.779 END TEST thread_poller_perf 00:05:36.779 ************************************ 00:05:36.779 20:58:52 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:36.779 00:05:36.779 real 0m2.858s 00:05:36.779 user 0m2.485s 00:05:36.779 sys 0m0.358s 00:05:36.779 20:58:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.779 20:58:52 -- common/autotest_common.sh@10 -- # set +x 00:05:36.779 ************************************ 00:05:36.779 END TEST thread 00:05:36.779 ************************************ 00:05:36.779 20:58:52 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:36.779 20:58:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.779 20:58:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.779 20:58:52 -- common/autotest_common.sh@10 -- # set +x 00:05:36.779 ************************************ 00:05:36.779 START TEST accel 00:05:36.779 ************************************ 00:05:36.779 20:58:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:36.779 * Looking for test storage... 00:05:36.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:36.779 20:58:52 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:36.779 20:58:52 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:36.779 20:58:52 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.779 20:58:52 -- accel/accel.sh@62 -- # spdk_tgt_pid=2873858 00:05:36.779 20:58:52 -- accel/accel.sh@63 -- # waitforlisten 2873858 00:05:36.779 20:58:52 -- common/autotest_common.sh@817 -- # '[' -z 2873858 ']' 00:05:36.779 20:58:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.779 20:58:52 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:36.779 20:58:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.779 20:58:52 -- accel/accel.sh@61 -- # build_accel_config 00:05:36.779 20:58:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.779 20:58:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.779 20:58:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.779 20:58:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.779 20:58:52 -- common/autotest_common.sh@10 -- # set +x 00:05:36.780 20:58:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.780 20:58:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.780 20:58:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.780 20:58:52 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.780 20:58:52 -- accel/accel.sh@41 -- # jq -r . 
00:05:37.038 [2024-04-18 20:58:52.735376] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:37.038 [2024-04-18 20:58:52.735421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873858 ] 00:05:37.038 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.038 [2024-04-18 20:58:52.795183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.038 [2024-04-18 20:58:52.866948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.603 20:58:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.603 20:58:53 -- common/autotest_common.sh@850 -- # return 0 00:05:37.603 20:58:53 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:37.603 20:58:53 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:37.603 20:58:53 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:37.603 20:58:53 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:37.603 20:58:53 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:37.603 20:58:53 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:37.603 20:58:53 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:37.603 20:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:37.603 20:58:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.861 20:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # IFS== 00:05:37.861 20:58:53 -- accel/accel.sh@72 -- # read -r opc module 00:05:37.861 20:58:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:37.861 20:58:53 -- accel/accel.sh@75 -- # killprocess 2873858 00:05:37.861 20:58:53 -- common/autotest_common.sh@936 -- # '[' -z 2873858 ']' 00:05:37.861 20:58:53 -- common/autotest_common.sh@940 -- # kill -0 2873858 00:05:37.861 20:58:53 -- common/autotest_common.sh@941 -- # uname 00:05:37.861 20:58:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.861 20:58:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2873858 00:05:37.861 20:58:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.861 20:58:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.861 20:58:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2873858' 00:05:37.861 killing process with pid 2873858 00:05:37.861 20:58:53 -- common/autotest_common.sh@955 -- # kill 2873858 00:05:37.861 20:58:53 -- common/autotest_common.sh@960 -- # wait 2873858 00:05:38.120 20:58:53 -- accel/accel.sh@76 -- # trap - ERR 00:05:38.120 20:58:53 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:38.120 20:58:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:38.120 20:58:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.120 20:58:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.379 20:58:54 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:38.379 20:58:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:38.379 20:58:54 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:38.379 20:58:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.379 20:58:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.379 20:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.379 20:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.379 20:58:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.379 20:58:54 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.379 20:58:54 -- accel/accel.sh@41 -- # jq -r . 00:05:38.379 20:58:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.379 20:58:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.379 20:58:54 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:38.379 20:58:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:38.379 20:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.379 20:58:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.379 ************************************ 00:05:38.379 START TEST accel_missing_filename 00:05:38.379 ************************************ 00:05:38.379 20:58:54 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:38.379 20:58:54 -- common/autotest_common.sh@638 -- # local es=0 00:05:38.379 20:58:54 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:38.379 20:58:54 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:38.379 20:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.379 20:58:54 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:38.379 20:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.379 20:58:54 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:38.379 20:58:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:38.379 20:58:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.379 20:58:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.379 20:58:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.379 20:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.379 20:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.379 20:58:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.379 20:58:54 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.379 20:58:54 -- accel/accel.sh@41 -- # jq -r . 00:05:38.379 [2024-04-18 20:58:54.309984] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:38.379 [2024-04-18 20:58:54.310056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874143 ] 00:05:38.640 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.640 [2024-04-18 20:58:54.372230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.640 [2024-04-18 20:58:54.446979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.640 [2024-04-18 20:58:54.487174] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.640 [2024-04-18 20:58:54.546780] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:38.898 A filename is required. 
00:05:38.898 20:58:54 -- common/autotest_common.sh@641 -- # es=234 00:05:38.898 20:58:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:38.898 20:58:54 -- common/autotest_common.sh@650 -- # es=106 00:05:38.898 20:58:54 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:38.898 20:58:54 -- common/autotest_common.sh@658 -- # es=1 00:05:38.898 20:58:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:38.898 00:05:38.898 real 0m0.363s 00:05:38.898 user 0m0.281s 00:05:38.898 sys 0m0.119s 00:05:38.898 20:58:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.898 20:58:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.898 ************************************ 00:05:38.898 END TEST accel_missing_filename 00:05:38.898 ************************************ 00:05:38.898 20:58:54 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.898 20:58:54 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:38.898 20:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.898 20:58:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.898 ************************************ 00:05:38.898 START TEST accel_compress_verify 00:05:38.898 ************************************ 00:05:38.898 20:58:54 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.898 20:58:54 -- common/autotest_common.sh@638 -- # local es=0 00:05:38.898 20:58:54 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.898 20:58:54 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:38.898 20:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.898 20:58:54 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:38.898 20:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.898 20:58:54 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.898 20:58:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.898 20:58:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.898 20:58:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.898 20:58:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.898 20:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.898 20:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.898 20:58:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.898 20:58:54 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.898 20:58:54 -- accel/accel.sh@41 -- # jq -r . 00:05:39.157 [2024-04-18 20:58:54.836085] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:39.157 [2024-04-18 20:58:54.836155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874321 ] 00:05:39.157 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.157 [2024-04-18 20:58:54.898915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.157 [2024-04-18 20:58:54.974177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.157 [2024-04-18 20:58:55.015137] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.157 [2024-04-18 20:58:55.075071] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:39.415 00:05:39.415 Compression does not support the verify option, aborting. 00:05:39.415 20:58:55 -- common/autotest_common.sh@641 -- # es=161 00:05:39.415 20:58:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.415 20:58:55 -- common/autotest_common.sh@650 -- # es=33 00:05:39.415 20:58:55 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:39.415 20:58:55 -- common/autotest_common.sh@658 -- # es=1 00:05:39.415 20:58:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.415 00:05:39.415 real 0m0.363s 00:05:39.416 user 0m0.276s 00:05:39.416 sys 0m0.126s 00:05:39.416 20:58:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.416 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:05:39.416 ************************************ 00:05:39.416 END TEST accel_compress_verify 00:05:39.416 ************************************ 00:05:39.416 20:58:55 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:39.416 20:58:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:39.416 20:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.416 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:05:39.416 ************************************ 00:05:39.416 START TEST accel_wrong_workload 00:05:39.416 ************************************ 00:05:39.416 20:58:55 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:39.416 20:58:55 -- common/autotest_common.sh@638 -- # local es=0 00:05:39.416 20:58:55 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:39.416 20:58:55 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:39.416 20:58:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.416 20:58:55 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:39.416 20:58:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.416 20:58:55 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:39.416 20:58:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:39.416 20:58:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.416 20:58:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.416 20:58:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.416 20:58:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.416 20:58:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.416 20:58:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.416 20:58:55 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.416 20:58:55 -- accel/accel.sh@41 -- # jq -r . 
00:05:39.416 Unsupported workload type: foobar 00:05:39.416 [2024-04-18 20:58:55.337789] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:39.416 accel_perf options: 00:05:39.416 [-h help message] 00:05:39.416 [-q queue depth per core] 00:05:39.416 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:39.416 [-T number of threads per core 00:05:39.416 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:39.416 [-t time in seconds] 00:05:39.416 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:39.416 [ dif_verify, , dif_generate, dif_generate_copy 00:05:39.416 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:39.416 [-l for compress/decompress workloads, name of uncompressed input file 00:05:39.416 [-S for crc32c workload, use this seed value (default 0) 00:05:39.416 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:39.416 [-f for fill workload, use this BYTE value (default 255) 00:05:39.416 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:39.416 [-y verify result if this switch is on] 00:05:39.416 [-a tasks to allocate per core (default: same value as -q)] 00:05:39.416 Can be used to spread operations across a wider range of memory. 00:05:39.416 20:58:55 -- common/autotest_common.sh@641 -- # es=1 00:05:39.416 20:58:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.416 20:58:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:39.416 20:58:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.416 00:05:39.416 real 0m0.034s 00:05:39.416 user 0m0.022s 00:05:39.416 sys 0m0.012s 00:05:39.416 20:58:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.416 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:05:39.416 ************************************ 00:05:39.416 END TEST accel_wrong_workload 00:05:39.416 ************************************ 00:05:39.675 Error: writing output failed: Broken pipe 00:05:39.675 20:58:55 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:39.675 20:58:55 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:39.675 20:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.675 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:05:39.675 ************************************ 00:05:39.675 START TEST accel_negative_buffers 00:05:39.675 ************************************ 00:05:39.675 20:58:55 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:39.675 20:58:55 -- common/autotest_common.sh@638 -- # local es=0 00:05:39.675 20:58:55 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:39.675 20:58:55 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:39.675 20:58:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.675 20:58:55 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:39.675 20:58:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.675 20:58:55 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:39.675 20:58:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:39.675 20:58:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.675 20:58:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.675 20:58:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.675 20:58:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.675 20:58:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.675 20:58:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.675 20:58:55 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.675 20:58:55 -- accel/accel.sh@41 -- # jq -r . 00:05:39.675 -x option must be non-negative. 00:05:39.675 [2024-04-18 20:58:55.552559] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:39.675 accel_perf options: 00:05:39.675 [-h help message] 00:05:39.675 [-q queue depth per core] 00:05:39.675 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:39.675 [-T number of threads per core 00:05:39.675 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:39.675 [-t time in seconds] 00:05:39.675 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:39.675 [ dif_verify, , dif_generate, dif_generate_copy 00:05:39.675 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:39.675 [-l for compress/decompress workloads, name of uncompressed input file 00:05:39.675 [-S for crc32c workload, use this seed value (default 0) 00:05:39.675 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:39.675 [-f for fill workload, use this BYTE value (default 255) 00:05:39.675 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:39.675 [-y verify result if this switch is on] 00:05:39.675 [-a tasks to allocate per core (default: same value as -q)] 00:05:39.675 Can be used to spread operations across a wider range of memory. 
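The usage text is printed here because -x -1 fails argument parsing before any work is submitted; that same text notes the xor workload needs at least two source buffers. A short sketch contrasting the rejected call with the smallest form that should parse, using the binary path from the xtrace above (sketch only, not part of the test script):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w xor -y -x -1 && echo "unexpected: negative -x was accepted" >&2
  "$PERF" -t 1 -w xor -y -x 2    # minimum source-buffer count per the usage text above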
00:05:39.675 20:58:55 -- common/autotest_common.sh@641 -- # es=1 00:05:39.675 20:58:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.675 20:58:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:39.675 20:58:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.675 00:05:39.676 real 0m0.033s 00:05:39.676 user 0m0.020s 00:05:39.676 sys 0m0.013s 00:05:39.676 20:58:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.676 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:05:39.676 ************************************ 00:05:39.676 END TEST accel_negative_buffers 00:05:39.676 ************************************ 00:05:39.676 Error: writing output failed: Broken pipe 00:05:39.676 20:58:55 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:39.676 20:58:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:39.676 20:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.676 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:05:39.933 ************************************ 00:05:39.934 START TEST accel_crc32c 00:05:39.934 ************************************ 00:05:39.934 20:58:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:39.934 20:58:55 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.934 20:58:55 -- accel/accel.sh@17 -- # local accel_module 00:05:39.934 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:39.934 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:39.934 20:58:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:39.934 20:58:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:39.934 20:58:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.934 20:58:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.934 20:58:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.934 20:58:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.934 20:58:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.934 20:58:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.934 20:58:55 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.934 20:58:55 -- accel/accel.sh@41 -- # jq -r . 00:05:39.934 [2024-04-18 20:58:55.726774] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
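The accel_crc32c case starting here goes through accel_test, which, per the xtrace, builds the (empty here) accel JSON config and then runs accel_perf with the flags shown: -S 32 is the crc32c seed and -y enables result verification, matching the option help printed earlier. A hand-run equivalent, as a sketch with the same binary path and without the /dev/fd/62 config plumbing:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w crc32c -S 32 -y   # 1-second software crc32c run, seed 32, verify results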
00:05:39.934 [2024-04-18 20:58:55.726838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874482 ] 00:05:39.934 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.934 [2024-04-18 20:58:55.785392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.934 [2024-04-18 20:58:55.855294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val= 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val= 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val=0x1 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val= 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val= 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val=crc32c 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val=32 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val= 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val=software 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@22 -- # accel_module=software 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val=32 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val=32 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- 
accel/accel.sh@20 -- # val=1 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val=Yes 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val= 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:40.260 20:58:55 -- accel/accel.sh@20 -- # val= 00:05:40.260 20:58:55 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # IFS=: 00:05:40.260 20:58:55 -- accel/accel.sh@19 -- # read -r var val 00:05:41.197 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.197 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.197 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.197 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.197 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.197 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.197 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.197 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.197 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.197 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.197 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.197 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.197 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.197 20:58:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.197 20:58:57 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:41.197 20:58:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.197 00:05:41.197 real 0m1.357s 00:05:41.197 user 0m1.253s 00:05:41.197 sys 0m0.115s 00:05:41.197 20:58:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.197 20:58:57 -- common/autotest_common.sh@10 -- # set +x 00:05:41.197 ************************************ 00:05:41.197 END TEST accel_crc32c 00:05:41.197 ************************************ 00:05:41.197 20:58:57 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:41.197 20:58:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:41.197 20:58:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.197 20:58:57 -- common/autotest_common.sh@10 -- # set +x 00:05:41.457 ************************************ 00:05:41.457 START TEST 
accel_crc32c_C2 00:05:41.457 ************************************ 00:05:41.457 20:58:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:41.457 20:58:57 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.457 20:58:57 -- accel/accel.sh@17 -- # local accel_module 00:05:41.457 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.457 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.457 20:58:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:41.457 20:58:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.457 20:58:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:41.457 20:58:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.457 20:58:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.457 20:58:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.457 20:58:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.457 20:58:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.457 20:58:57 -- accel/accel.sh@40 -- # local IFS=, 00:05:41.457 20:58:57 -- accel/accel.sh@41 -- # jq -r . 00:05:41.457 [2024-04-18 20:58:57.221386] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:41.457 [2024-04-18 20:58:57.221450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874736 ] 00:05:41.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.457 [2024-04-18 20:58:57.280297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.457 [2024-04-18 20:58:57.351705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val=0x1 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val=crc32c 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val=0 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val=software 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@22 -- # accel_module=software 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val=32 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val=32 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val=1 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.716 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.716 20:58:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.716 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.717 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.717 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.717 20:58:57 -- accel/accel.sh@20 -- # val=Yes 00:05:41.717 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.717 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.717 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.717 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.717 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.717 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.717 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:41.717 20:58:57 -- accel/accel.sh@20 -- # val= 00:05:41.717 20:58:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.717 20:58:57 -- accel/accel.sh@19 -- # IFS=: 00:05:41.717 20:58:57 -- accel/accel.sh@19 -- # read -r var val 00:05:42.654 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:42.654 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:42.654 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:42.654 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:42.654 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:42.654 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:42.654 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:42.654 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:42.654 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:42.654 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:42.654 20:58:58 -- 
accel/accel.sh@19 -- # read -r var val 00:05:42.654 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:42.654 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:42.654 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:42.654 20:58:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.654 20:58:58 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:42.654 20:58:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.654 00:05:42.654 real 0m1.363s 00:05:42.654 user 0m1.258s 00:05:42.654 sys 0m0.116s 00:05:42.654 20:58:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.654 20:58:58 -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 ************************************ 00:05:42.654 END TEST accel_crc32c_C2 00:05:42.654 ************************************ 00:05:42.913 20:58:58 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:42.913 20:58:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:42.913 20:58:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.913 20:58:58 -- common/autotest_common.sh@10 -- # set +x 00:05:42.913 ************************************ 00:05:42.913 START TEST accel_copy 00:05:42.913 ************************************ 00:05:42.913 20:58:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:42.913 20:58:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.913 20:58:58 -- accel/accel.sh@17 -- # local accel_module 00:05:42.913 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:42.913 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:42.913 20:58:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:42.913 20:58:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:42.913 20:58:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.913 20:58:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.913 20:58:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.913 20:58:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.913 20:58:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.913 20:58:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.913 20:58:58 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.913 20:58:58 -- accel/accel.sh@41 -- # jq -r . 00:05:42.913 [2024-04-18 20:58:58.731173] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:42.913 [2024-04-18 20:58:58.731218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875004 ] 00:05:42.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.913 [2024-04-18 20:58:58.790972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.172 [2024-04-18 20:58:58.863816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.172 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:43.172 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.172 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:43.172 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.172 20:58:58 -- accel/accel.sh@20 -- # val=0x1 00:05:43.172 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.172 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:43.172 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.172 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:43.172 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.172 20:58:58 -- accel/accel.sh@20 -- # val=copy 00:05:43.172 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.172 20:58:58 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.172 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.172 20:58:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.172 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- accel/accel.sh@20 -- # val=software 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@22 -- # accel_module=software 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- accel/accel.sh@20 -- # val=32 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- accel/accel.sh@20 -- # val=32 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- accel/accel.sh@20 -- # val=1 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- accel/accel.sh@20 -- # val=Yes 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:43.173 20:58:58 -- accel/accel.sh@20 -- # val= 00:05:43.173 20:58:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # IFS=: 00:05:43.173 20:58:58 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.548 20:59:00 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:44.548 20:59:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.548 00:05:44.548 real 0m1.360s 00:05:44.548 user 0m1.255s 00:05:44.548 sys 0m0.118s 00:05:44.548 20:59:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.548 20:59:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 ************************************ 00:05:44.548 END TEST accel_copy 00:05:44.548 ************************************ 00:05:44.548 20:59:00 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.548 20:59:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:44.548 20:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.548 20:59:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 ************************************ 00:05:44.548 START TEST accel_fill 00:05:44.548 ************************************ 00:05:44.548 20:59:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.548 20:59:00 -- accel/accel.sh@16 -- # local accel_opc 
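The accel_fill case under way here exercises the fill workload with -f 128 (the fill byte, 0x80 in the xtrace), -q 64 (queue depth per core) and -a 64 (tasks allocated per core), with -y verifying the output, matching the option help shown earlier. A one-line sketch of the equivalent manual run (same binary path as above; config plumbing omitted):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill byte 128, qd 64, 64 tasks/core, verify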
00:05:44.548 20:59:00 -- accel/accel.sh@17 -- # local accel_module 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.548 20:59:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.548 20:59:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.548 20:59:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.548 20:59:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.548 20:59:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.548 20:59:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.548 20:59:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.548 20:59:00 -- accel/accel.sh@40 -- # local IFS=, 00:05:44.548 20:59:00 -- accel/accel.sh@41 -- # jq -r . 00:05:44.548 [2024-04-18 20:59:00.260482] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:44.548 [2024-04-18 20:59:00.260559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875335 ] 00:05:44.548 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.548 [2024-04-18 20:59:00.326630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.548 [2024-04-18 20:59:00.403116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val=0x1 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val=fill 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val=0x80 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 
-- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val=software 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@22 -- # accel_module=software 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val=64 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val=64 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val=1 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val=Yes 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:44.548 20:59:00 -- accel/accel.sh@20 -- # val= 00:05:44.548 20:59:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # IFS=: 00:05:44.548 20:59:00 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:45.923 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:45.923 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:45.923 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:45.923 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:45.923 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:45.923 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 20:59:01 -- accel/accel.sh@19 
-- # IFS=: 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 20:59:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.923 20:59:01 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:45.923 20:59:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.923 00:05:45.923 real 0m1.377s 00:05:45.923 user 0m1.265s 00:05:45.923 sys 0m0.125s 00:05:45.923 20:59:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.923 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.923 ************************************ 00:05:45.923 END TEST accel_fill 00:05:45.923 ************************************ 00:05:45.923 20:59:01 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:45.923 20:59:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:45.923 20:59:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.923 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.923 ************************************ 00:05:45.923 START TEST accel_copy_crc32c 00:05:45.923 ************************************ 00:05:45.923 20:59:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:45.923 20:59:01 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.923 20:59:01 -- accel/accel.sh@17 -- # local accel_module 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 20:59:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:45.924 20:59:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:45.924 20:59:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.924 20:59:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.924 20:59:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.924 20:59:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.924 20:59:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.924 20:59:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.924 20:59:01 -- accel/accel.sh@40 -- # local IFS=, 00:05:45.924 20:59:01 -- accel/accel.sh@41 -- # jq -r . 00:05:45.924 [2024-04-18 20:59:01.799778] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:45.924 [2024-04-18 20:59:01.799830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875698 ] 00:05:45.924 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.182 [2024-04-18 20:59:01.860956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.182 [2024-04-18 20:59:01.930656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val=0x1 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val=0 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val=software 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@22 -- # accel_module=software 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val=32 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 
00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val=32 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val=1 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val=Yes 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:46.182 20:59:01 -- accel/accel.sh@20 -- # val= 00:05:46.182 20:59:01 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # IFS=: 00:05:46.182 20:59:01 -- accel/accel.sh@19 -- # read -r var val 00:05:47.556 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.556 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.556 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.556 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.556 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.556 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.556 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.556 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.556 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.556 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.556 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.556 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.556 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.556 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.556 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.556 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.556 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.557 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.557 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 20:59:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.557 20:59:03 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:47.557 20:59:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.557 00:05:47.557 real 0m1.363s 00:05:47.557 user 0m1.258s 00:05:47.557 sys 0m0.119s 00:05:47.557 20:59:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.557 20:59:03 -- common/autotest_common.sh@10 -- # set +x 00:05:47.557 ************************************ 00:05:47.557 END TEST accel_copy_crc32c 00:05:47.557 ************************************ 00:05:47.557 20:59:03 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:47.557 
20:59:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:47.557 20:59:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.557 20:59:03 -- common/autotest_common.sh@10 -- # set +x 00:05:47.557 ************************************ 00:05:47.557 START TEST accel_copy_crc32c_C2 00:05:47.557 ************************************ 00:05:47.557 20:59:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:47.557 20:59:03 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.557 20:59:03 -- accel/accel.sh@17 -- # local accel_module 00:05:47.557 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 20:59:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:47.557 20:59:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:47.557 20:59:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.557 20:59:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.557 20:59:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.557 20:59:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.557 20:59:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.557 20:59:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.557 20:59:03 -- accel/accel.sh@40 -- # local IFS=, 00:05:47.557 20:59:03 -- accel/accel.sh@41 -- # jq -r . 00:05:47.557 [2024-04-18 20:59:03.330098] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:47.557 [2024-04-18 20:59:03.330161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875980 ] 00:05:47.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.557 [2024-04-18 20:59:03.390733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.557 [2024-04-18 20:59:03.463720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.824 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.824 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.824 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.824 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val=0x1 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 
20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val=0 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val=software 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@22 -- # accel_module=software 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val=32 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val=32 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val=1 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val=Yes 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:47.825 20:59:03 -- accel/accel.sh@20 -- # val= 00:05:47.825 20:59:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # IFS=: 00:05:47.825 20:59:03 -- accel/accel.sh@19 -- # read -r var val 00:05:48.761 20:59:04 -- accel/accel.sh@20 -- # val= 00:05:48.761 20:59:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # IFS=: 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # read -r var val 00:05:48.761 20:59:04 -- accel/accel.sh@20 -- # val= 00:05:48.761 20:59:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # IFS=: 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # read -r var val 00:05:48.761 20:59:04 -- accel/accel.sh@20 -- # val= 00:05:48.761 20:59:04 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # IFS=: 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # read -r var val 00:05:48.761 20:59:04 -- accel/accel.sh@20 -- # val= 00:05:48.761 20:59:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # IFS=: 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # read -r var val 00:05:48.761 20:59:04 -- accel/accel.sh@20 -- # val= 00:05:48.761 20:59:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # IFS=: 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # read -r var val 00:05:48.761 20:59:04 -- accel/accel.sh@20 -- # val= 00:05:48.761 20:59:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # IFS=: 00:05:48.761 20:59:04 -- accel/accel.sh@19 -- # read -r var val 00:05:48.761 20:59:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.761 20:59:04 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:48.761 20:59:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.761 00:05:48.761 real 0m1.365s 00:05:48.761 user 0m1.265s 00:05:48.761 sys 0m0.112s 00:05:48.761 20:59:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.761 20:59:04 -- common/autotest_common.sh@10 -- # set +x 00:05:48.761 ************************************ 00:05:48.761 END TEST accel_copy_crc32c_C2 00:05:48.761 ************************************ 00:05:49.019 20:59:04 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:49.019 20:59:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:49.019 20:59:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.019 20:59:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.019 ************************************ 00:05:49.019 START TEST accel_dualcast 00:05:49.019 ************************************ 00:05:49.019 20:59:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:49.020 20:59:04 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.020 20:59:04 -- accel/accel.sh@17 -- # local accel_module 00:05:49.020 20:59:04 -- accel/accel.sh@19 -- # IFS=: 00:05:49.020 20:59:04 -- accel/accel.sh@19 -- # read -r var val 00:05:49.020 20:59:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:49.020 20:59:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:49.020 20:59:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.020 20:59:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.020 20:59:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.020 20:59:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.020 20:59:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.020 20:59:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.020 20:59:04 -- accel/accel.sh@40 -- # local IFS=, 00:05:49.020 20:59:04 -- accel/accel.sh@41 -- # jq -r . 00:05:49.020 [2024-04-18 20:59:04.871750] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:49.020 [2024-04-18 20:59:04.871802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876240 ] 00:05:49.020 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.020 [2024-04-18 20:59:04.933040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.279 [2024-04-18 20:59:05.012346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val= 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val= 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val=0x1 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val= 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val= 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val=dualcast 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val= 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val=software 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@22 -- # accel_module=software 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val=32 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val=32 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val=1 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 
-- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val=Yes 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val= 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:49.279 20:59:05 -- accel/accel.sh@20 -- # val= 00:05:49.279 20:59:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # IFS=: 00:05:49.279 20:59:05 -- accel/accel.sh@19 -- # read -r var val 00:05:50.662 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.662 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.662 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.662 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.662 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.662 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.663 20:59:06 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:50.663 20:59:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.663 00:05:50.663 real 0m1.373s 00:05:50.663 user 0m1.260s 00:05:50.663 sys 0m0.125s 00:05:50.663 20:59:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.663 20:59:06 -- common/autotest_common.sh@10 -- # set +x 00:05:50.663 ************************************ 00:05:50.663 END TEST accel_dualcast 00:05:50.663 ************************************ 00:05:50.663 20:59:06 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:50.663 20:59:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:50.663 20:59:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.663 20:59:06 -- common/autotest_common.sh@10 -- # set +x 00:05:50.663 ************************************ 00:05:50.663 START TEST accel_compare 00:05:50.663 ************************************ 00:05:50.663 20:59:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:50.663 20:59:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.663 20:59:06 
-- accel/accel.sh@17 -- # local accel_module 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:50.663 20:59:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:50.663 20:59:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.663 20:59:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.663 20:59:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.663 20:59:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.663 20:59:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.663 20:59:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.663 20:59:06 -- accel/accel.sh@40 -- # local IFS=, 00:05:50.663 20:59:06 -- accel/accel.sh@41 -- # jq -r . 00:05:50.663 [2024-04-18 20:59:06.400985] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:50.663 [2024-04-18 20:59:06.401041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876497 ] 00:05:50.663 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.663 [2024-04-18 20:59:06.463606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.663 [2024-04-18 20:59:06.538056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val=0x1 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val=compare 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- 
accel/accel.sh@20 -- # val=software 00:05:50.663 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.663 20:59:06 -- accel/accel.sh@22 -- # accel_module=software 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.663 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.663 20:59:06 -- accel/accel.sh@20 -- # val=32 00:05:50.921 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.921 20:59:06 -- accel/accel.sh@20 -- # val=32 00:05:50.921 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.921 20:59:06 -- accel/accel.sh@20 -- # val=1 00:05:50.921 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.921 20:59:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.921 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.921 20:59:06 -- accel/accel.sh@20 -- # val=Yes 00:05:50.921 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.921 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.921 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:50.921 20:59:06 -- accel/accel.sh@20 -- # val= 00:05:50.921 20:59:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # IFS=: 00:05:50.921 20:59:06 -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:59:07 -- accel/accel.sh@20 -- # val= 00:05:51.854 20:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # IFS=: 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # read -r var val 00:05:51.854 20:59:07 -- accel/accel.sh@20 -- # val= 00:05:51.854 20:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # IFS=: 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # read -r var val 00:05:51.854 20:59:07 -- accel/accel.sh@20 -- # val= 00:05:51.854 20:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # IFS=: 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # read -r var val 00:05:51.854 20:59:07 -- accel/accel.sh@20 -- # val= 00:05:51.854 20:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # IFS=: 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # read -r var val 00:05:51.854 20:59:07 -- accel/accel.sh@20 -- # val= 00:05:51.854 20:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # IFS=: 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # read -r var val 00:05:51.854 20:59:07 -- accel/accel.sh@20 -- # val= 00:05:51.854 20:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # IFS=: 00:05:51.854 20:59:07 -- accel/accel.sh@19 -- # read -r var val 00:05:51.854 20:59:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.854 20:59:07 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:51.854 20:59:07 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:51.854 00:05:51.854 real 0m1.367s 00:05:51.854 user 0m1.256s 00:05:51.854 sys 0m0.122s 00:05:51.854 20:59:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.854 20:59:07 -- common/autotest_common.sh@10 -- # set +x 00:05:51.854 ************************************ 00:05:51.854 END TEST accel_compare 00:05:51.854 ************************************ 00:05:51.854 20:59:07 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:51.854 20:59:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:51.854 20:59:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.854 20:59:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.112 ************************************ 00:05:52.112 START TEST accel_xor 00:05:52.112 ************************************ 00:05:52.112 20:59:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:52.112 20:59:07 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.112 20:59:07 -- accel/accel.sh@17 -- # local accel_module 00:05:52.112 20:59:07 -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:59:07 -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:59:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:52.113 20:59:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:52.113 20:59:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.113 20:59:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.113 20:59:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.113 20:59:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.113 20:59:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.113 20:59:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.113 20:59:07 -- accel/accel.sh@40 -- # local IFS=, 00:05:52.113 20:59:07 -- accel/accel.sh@41 -- # jq -r . 00:05:52.113 [2024-04-18 20:59:07.917421] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:52.113 [2024-04-18 20:59:07.917488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876754 ] 00:05:52.113 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.113 [2024-04-18 20:59:07.976522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.371 [2024-04-18 20:59:08.048799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val= 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val= 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val=0x1 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val= 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val= 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val=xor 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val=2 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val= 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val=software 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@22 -- # accel_module=software 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val=32 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val=32 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- 
accel/accel.sh@20 -- # val=1 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val=Yes 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val= 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:52.371 20:59:08 -- accel/accel.sh@20 -- # val= 00:05:52.371 20:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # IFS=: 00:05:52.371 20:59:08 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.745 20:59:09 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:53.745 20:59:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.745 00:05:53.745 real 0m1.362s 00:05:53.745 user 0m1.260s 00:05:53.745 sys 0m0.113s 00:05:53.745 20:59:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.745 20:59:09 -- common/autotest_common.sh@10 -- # set +x 00:05:53.745 ************************************ 00:05:53.745 END TEST accel_xor 00:05:53.745 ************************************ 00:05:53.745 20:59:09 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:53.745 20:59:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:53.745 20:59:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.745 20:59:09 -- common/autotest_common.sh@10 -- # set +x 00:05:53.745 ************************************ 00:05:53.745 START TEST accel_xor 
00:05:53.745 ************************************ 00:05:53.745 20:59:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:53.745 20:59:09 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.745 20:59:09 -- accel/accel.sh@17 -- # local accel_module 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:53.745 20:59:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:53.745 20:59:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.745 20:59:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.745 20:59:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.745 20:59:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.745 20:59:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.745 20:59:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.745 20:59:09 -- accel/accel.sh@40 -- # local IFS=, 00:05:53.745 20:59:09 -- accel/accel.sh@41 -- # jq -r . 00:05:53.745 [2024-04-18 20:59:09.435441] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:53.745 [2024-04-18 20:59:09.435487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877014 ] 00:05:53.745 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.745 [2024-04-18 20:59:09.493658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.745 [2024-04-18 20:59:09.565788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val=0x1 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val=xor 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val=3 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val=software 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@22 -- # accel_module=software 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val=32 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val=32 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val=1 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val=Yes 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.745 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:53.745 20:59:09 -- accel/accel.sh@20 -- # val= 00:05:53.745 20:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.746 20:59:09 -- accel/accel.sh@19 -- # IFS=: 00:05:53.746 20:59:09 -- accel/accel.sh@19 -- # read -r var val 00:05:55.119 20:59:10 -- accel/accel.sh@20 -- # val= 00:05:55.119 20:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.119 20:59:10 -- accel/accel.sh@19 -- # IFS=: 00:05:55.119 20:59:10 -- accel/accel.sh@19 -- # read -r var val 00:05:55.119 20:59:10 -- accel/accel.sh@20 -- # val= 00:05:55.119 20:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # IFS=: 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # read -r var val 00:05:55.120 20:59:10 -- accel/accel.sh@20 -- # val= 00:05:55.120 20:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # IFS=: 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # read -r var val 00:05:55.120 20:59:10 -- accel/accel.sh@20 -- # val= 00:05:55.120 20:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # IFS=: 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # read -r var val 00:05:55.120 20:59:10 -- accel/accel.sh@20 -- # val= 00:05:55.120 20:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # IFS=: 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # 
read -r var val 00:05:55.120 20:59:10 -- accel/accel.sh@20 -- # val= 00:05:55.120 20:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # IFS=: 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # read -r var val 00:05:55.120 20:59:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.120 20:59:10 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:55.120 20:59:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.120 00:05:55.120 real 0m1.358s 00:05:55.120 user 0m1.254s 00:05:55.120 sys 0m0.116s 00:05:55.120 20:59:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.120 20:59:10 -- common/autotest_common.sh@10 -- # set +x 00:05:55.120 ************************************ 00:05:55.120 END TEST accel_xor 00:05:55.120 ************************************ 00:05:55.120 20:59:10 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:55.120 20:59:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:55.120 20:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.120 20:59:10 -- common/autotest_common.sh@10 -- # set +x 00:05:55.120 ************************************ 00:05:55.120 START TEST accel_dif_verify 00:05:55.120 ************************************ 00:05:55.120 20:59:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:55.120 20:59:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.120 20:59:10 -- accel/accel.sh@17 -- # local accel_module 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # IFS=: 00:05:55.120 20:59:10 -- accel/accel.sh@19 -- # read -r var val 00:05:55.120 20:59:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:55.120 20:59:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:55.120 20:59:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.120 20:59:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.120 20:59:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.120 20:59:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.120 20:59:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.120 20:59:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.120 20:59:10 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.120 20:59:10 -- accel/accel.sh@41 -- # jq -r . 00:05:55.120 [2024-04-18 20:59:10.961498] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:55.120 [2024-04-18 20:59:10.961721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877273 ] 00:05:55.120 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.120 [2024-04-18 20:59:11.024972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.379 [2024-04-18 20:59:11.101400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val= 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val= 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val=0x1 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val= 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val= 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val=dif_verify 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val= 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val=software 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@22 -- # accel_module=software 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r 
var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val=32 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val=32 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val=1 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val=No 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val= 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:55.379 20:59:11 -- accel/accel.sh@20 -- # val= 00:05:55.379 20:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # IFS=: 00:05:55.379 20:59:11 -- accel/accel.sh@19 -- # read -r var val 00:05:56.753 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.753 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.753 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.753 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.753 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.753 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.753 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.753 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.753 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.753 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.753 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.753 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.753 20:59:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.753 20:59:12 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:56.753 20:59:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.753 00:05:56.753 real 0m1.371s 00:05:56.753 user 0m1.260s 00:05:56.753 sys 0m0.123s 00:05:56.753 20:59:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.753 20:59:12 -- common/autotest_common.sh@10 -- # set +x 00:05:56.753 
************************************ 00:05:56.753 END TEST accel_dif_verify 00:05:56.753 ************************************ 00:05:56.753 20:59:12 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:56.753 20:59:12 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:56.753 20:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.753 20:59:12 -- common/autotest_common.sh@10 -- # set +x 00:05:56.753 ************************************ 00:05:56.753 START TEST accel_dif_generate 00:05:56.753 ************************************ 00:05:56.753 20:59:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:56.753 20:59:12 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.753 20:59:12 -- accel/accel.sh@17 -- # local accel_module 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.753 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.753 20:59:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:56.753 20:59:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:56.753 20:59:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.753 20:59:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.753 20:59:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.753 20:59:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.753 20:59:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.753 20:59:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.753 20:59:12 -- accel/accel.sh@40 -- # local IFS=, 00:05:56.753 20:59:12 -- accel/accel.sh@41 -- # jq -r . 00:05:56.753 [2024-04-18 20:59:12.498397] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:56.754 [2024-04-18 20:59:12.498465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877597 ] 00:05:56.754 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.754 [2024-04-18 20:59:12.559759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.754 [2024-04-18 20:59:12.637186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.754 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.754 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.754 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.754 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.754 20:59:12 -- accel/accel.sh@20 -- # val=0x1 00:05:56.754 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.754 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.754 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.754 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:56.754 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:56.754 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:56.754 20:59:12 -- accel/accel.sh@20 -- # val=dif_generate 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val=software 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@22 -- # accel_module=software 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read 
-r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val=32 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val=32 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val=1 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.013 20:59:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.013 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.013 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.014 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.014 20:59:12 -- accel/accel.sh@20 -- # val=No 00:05:57.014 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.014 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.014 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.014 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:57.014 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.014 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.014 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.014 20:59:12 -- accel/accel.sh@20 -- # val= 00:05:57.014 20:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.014 20:59:12 -- accel/accel.sh@19 -- # IFS=: 00:05:57.014 20:59:12 -- accel/accel.sh@19 -- # read -r var val 00:05:57.981 20:59:13 -- accel/accel.sh@20 -- # val= 00:05:57.981 20:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # IFS=: 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # read -r var val 00:05:57.981 20:59:13 -- accel/accel.sh@20 -- # val= 00:05:57.981 20:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # IFS=: 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # read -r var val 00:05:57.981 20:59:13 -- accel/accel.sh@20 -- # val= 00:05:57.981 20:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # IFS=: 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # read -r var val 00:05:57.981 20:59:13 -- accel/accel.sh@20 -- # val= 00:05:57.981 20:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # IFS=: 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # read -r var val 00:05:57.981 20:59:13 -- accel/accel.sh@20 -- # val= 00:05:57.981 20:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # IFS=: 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # read -r var val 00:05:57.981 20:59:13 -- accel/accel.sh@20 -- # val= 00:05:57.981 20:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # IFS=: 00:05:57.981 20:59:13 -- accel/accel.sh@19 -- # read -r var val 00:05:57.981 20:59:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.981 20:59:13 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:57.981 20:59:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.981 00:05:57.981 real 0m1.371s 00:05:57.981 user 0m1.265s 00:05:57.981 sys 0m0.119s 00:05:57.981 20:59:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.981 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:05:57.981 
************************************ 00:05:57.981 END TEST accel_dif_generate 00:05:57.981 ************************************ 00:05:57.981 20:59:13 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:57.981 20:59:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:57.981 20:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.981 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:05:58.240 ************************************ 00:05:58.240 START TEST accel_dif_generate_copy 00:05:58.240 ************************************ 00:05:58.240 20:59:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:58.240 20:59:14 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.240 20:59:14 -- accel/accel.sh@17 -- # local accel_module 00:05:58.240 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.240 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.240 20:59:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:58.240 20:59:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:58.240 20:59:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.240 20:59:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.240 20:59:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.240 20:59:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.240 20:59:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.240 20:59:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.240 20:59:14 -- accel/accel.sh@40 -- # local IFS=, 00:05:58.240 20:59:14 -- accel/accel.sh@41 -- # jq -r . 00:05:58.240 [2024-04-18 20:59:14.027434] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:05:58.240 [2024-04-18 20:59:14.027483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877951 ] 00:05:58.240 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.240 [2024-04-18 20:59:14.087126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.240 [2024-04-18 20:59:14.157447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val= 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val= 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val=0x1 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val= 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val= 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val= 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val=software 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@22 -- # accel_module=software 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val=32 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val=32 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r 
var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val=1 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val=No 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val= 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:58.498 20:59:14 -- accel/accel.sh@20 -- # val= 00:05:58.498 20:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.498 20:59:14 -- accel/accel.sh@19 -- # IFS=: 00:05:58.499 20:59:14 -- accel/accel.sh@19 -- # read -r var val 00:05:59.443 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.443 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.443 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.443 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.443 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.443 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.443 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.443 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.443 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.443 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.443 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.443 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.443 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.443 20:59:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.443 20:59:15 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:59.443 20:59:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.443 00:05:59.443 real 0m1.362s 00:05:59.443 user 0m1.259s 00:05:59.443 sys 0m0.115s 00:05:59.443 20:59:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.443 20:59:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.443 ************************************ 00:05:59.443 END TEST accel_dif_generate_copy 00:05:59.443 ************************************ 00:05:59.701 20:59:15 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:59.702 20:59:15 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.702 20:59:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:59.702 20:59:15 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.702 20:59:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.702 ************************************ 00:05:59.702 START TEST accel_comp 00:05:59.702 ************************************ 00:05:59.702 20:59:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.702 20:59:15 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.702 20:59:15 -- accel/accel.sh@17 -- # local accel_module 00:05:59.702 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.702 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.702 20:59:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.702 20:59:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.702 20:59:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.702 20:59:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.702 20:59:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.702 20:59:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.702 20:59:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.702 20:59:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.702 20:59:15 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.702 20:59:15 -- accel/accel.sh@41 -- # jq -r . 00:05:59.702 [2024-04-18 20:59:15.564658] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:05:59.702 [2024-04-18 20:59:15.564708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878258 ] 00:05:59.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.702 [2024-04-18 20:59:15.622874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.960 [2024-04-18 20:59:15.695630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val=0x1 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 
-- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val=compress 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val=software 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val=32 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val=32 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val=1 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val=No 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:05:59.960 20:59:15 -- accel/accel.sh@20 -- # val= 00:05:59.960 20:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # IFS=: 00:05:59.960 20:59:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.334 20:59:16 -- accel/accel.sh@20 -- # val= 00:06:01.334 20:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # IFS=: 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # read -r var val 00:06:01.334 20:59:16 -- accel/accel.sh@20 -- # val= 00:06:01.334 20:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # IFS=: 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # read 
-r var val 00:06:01.334 20:59:16 -- accel/accel.sh@20 -- # val= 00:06:01.334 20:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # IFS=: 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # read -r var val 00:06:01.334 20:59:16 -- accel/accel.sh@20 -- # val= 00:06:01.334 20:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # IFS=: 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # read -r var val 00:06:01.334 20:59:16 -- accel/accel.sh@20 -- # val= 00:06:01.334 20:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # IFS=: 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # read -r var val 00:06:01.334 20:59:16 -- accel/accel.sh@20 -- # val= 00:06:01.334 20:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # IFS=: 00:06:01.334 20:59:16 -- accel/accel.sh@19 -- # read -r var val 00:06:01.334 20:59:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.334 20:59:16 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:01.334 20:59:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.334 00:06:01.334 real 0m1.364s 00:06:01.334 user 0m1.266s 00:06:01.334 sys 0m0.112s 00:06:01.334 20:59:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.334 20:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:01.334 ************************************ 00:06:01.334 END TEST accel_comp 00:06:01.334 ************************************ 00:06:01.334 20:59:16 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.334 20:59:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:01.334 20:59:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.334 20:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:01.334 ************************************ 00:06:01.334 START TEST accel_decomp 00:06:01.334 ************************************ 00:06:01.334 20:59:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.334 20:59:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.334 20:59:17 -- accel/accel.sh@17 -- # local accel_module 00:06:01.334 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.334 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.334 20:59:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.334 20:59:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.334 20:59:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.334 20:59:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.334 20:59:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.334 20:59:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.334 20:59:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.334 20:59:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.334 20:59:17 -- accel/accel.sh@40 -- # local IFS=, 00:06:01.334 20:59:17 -- accel/accel.sh@41 -- # jq -r . 00:06:01.334 [2024-04-18 20:59:17.093660] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
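For reference, the dif_generate_copy, compress and decompress runs traced above reduce to accel_perf command lines like the ones below. This is a minimal sketch for a local SPDK checkout in which build/examples/accel_perf has been built; the relative ./build path is an assumption of the sketch, and the "-c /dev/fd/62" argument is dropped because the JSON it carries is assembled at run time by build_accel_config in the trace.

# Software-path workloads exercised above, re-run standalone from an SPDK checkout
# (paths are assumptions of this sketch, not the exact CI command lines)
./build/examples/accel_perf -t 1 -w dif_generate_copy                   # DIF generate+copy for 1 second
./build/examples/accel_perf -t 1 -w compress   -l test/accel/bib        # compress the sample bib input
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y     # decompress and verify (-y), as in accel_decomp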
00:06:01.334 [2024-04-18 20:59:17.093718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878519 ] 00:06:01.334 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.334 [2024-04-18 20:59:17.153933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.334 [2024-04-18 20:59:17.224793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val= 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val= 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val= 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val=0x1 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val= 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val= 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val=decompress 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val= 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val=software 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@22 -- # accel_module=software 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val=32 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 
-- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val=32 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val=1 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val=Yes 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val= 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:01.592 20:59:17 -- accel/accel.sh@20 -- # val= 00:06:01.592 20:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # IFS=: 00:06:01.592 20:59:17 -- accel/accel.sh@19 -- # read -r var val 00:06:02.525 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:02.525 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:02.525 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:02.525 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:02.525 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:02.525 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:02.525 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:02.525 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:02.525 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:02.525 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:02.525 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:02.525 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:02.525 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:02.525 20:59:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.525 20:59:18 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.525 20:59:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.525 00:06:02.525 real 0m1.363s 00:06:02.525 user 0m1.258s 00:06:02.525 sys 0m0.120s 00:06:02.525 20:59:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.525 20:59:18 -- common/autotest_common.sh@10 -- # set +x 00:06:02.525 ************************************ 00:06:02.525 END TEST accel_decomp 00:06:02.525 ************************************ 00:06:02.783 20:59:18 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.784 20:59:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:02.784 20:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.784 20:59:18 -- common/autotest_common.sh@10 -- # set +x 00:06:02.784 ************************************ 00:06:02.784 START TEST accel_decmop_full 00:06:02.784 ************************************ 00:06:02.784 20:59:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.784 20:59:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.784 20:59:18 -- accel/accel.sh@17 -- # local accel_module 00:06:02.784 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:02.784 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:02.784 20:59:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.784 20:59:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.784 20:59:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.784 20:59:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.784 20:59:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.784 20:59:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.784 20:59:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.784 20:59:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.784 20:59:18 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.784 20:59:18 -- accel/accel.sh@41 -- # jq -r . 00:06:02.784 [2024-04-18 20:59:18.621734] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:06:02.784 [2024-04-18 20:59:18.621805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878772 ] 00:06:02.784 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.784 [2024-04-18 20:59:18.686868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.041 [2024-04-18 20:59:18.763427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.041 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:03.041 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.041 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.041 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.041 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:03.041 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.041 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.041 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.041 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:03.041 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.041 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.041 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.041 20:59:18 -- accel/accel.sh@20 -- # val=0x1 00:06:03.041 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.041 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val=decompress 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val=software 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@22 -- # accel_module=software 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val=32 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 
20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val=32 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val=1 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val=Yes 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:03.042 20:59:18 -- accel/accel.sh@20 -- # val= 00:06:03.042 20:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # IFS=: 00:06:03.042 20:59:18 -- accel/accel.sh@19 -- # read -r var val 00:06:04.413 20:59:19 -- accel/accel.sh@20 -- # val= 00:06:04.413 20:59:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.413 20:59:19 -- accel/accel.sh@20 -- # val= 00:06:04.413 20:59:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.413 20:59:19 -- accel/accel.sh@20 -- # val= 00:06:04.413 20:59:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.413 20:59:19 -- accel/accel.sh@20 -- # val= 00:06:04.413 20:59:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.413 20:59:19 -- accel/accel.sh@20 -- # val= 00:06:04.413 20:59:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.413 20:59:19 -- accel/accel.sh@20 -- # val= 00:06:04.413 20:59:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.413 20:59:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.413 20:59:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.413 20:59:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.413 20:59:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.413 00:06:04.413 real 0m1.385s 00:06:04.413 user 0m1.275s 00:06:04.413 sys 0m0.124s 00:06:04.413 20:59:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.413 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:04.413 ************************************ 00:06:04.413 END TEST accel_decmop_full 00:06:04.413 ************************************ 00:06:04.413 20:59:20 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.413 20:59:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:04.413 20:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.413 20:59:20 -- common/autotest_common.sh@10 -- # set +x 00:06:04.413 ************************************ 00:06:04.413 START TEST accel_decomp_mcore 00:06:04.413 ************************************ 00:06:04.413 20:59:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.413 20:59:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.413 20:59:20 -- accel/accel.sh@17 -- # local accel_module 00:06:04.413 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.413 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.413 20:59:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.413 20:59:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.413 20:59:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.413 20:59:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.413 20:59:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.413 20:59:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.413 20:59:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.413 20:59:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.413 20:59:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.413 20:59:20 -- accel/accel.sh@41 -- # jq -r . 00:06:04.413 [2024-04-18 20:59:20.172280] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:06:04.413 [2024-04-18 20:59:20.172334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879033 ] 00:06:04.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.413 [2024-04-18 20:59:20.236482] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.413 [2024-04-18 20:59:20.313259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.413 [2024-04-18 20:59:20.313276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.413 [2024-04-18 20:59:20.313362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.413 [2024-04-18 20:59:20.313364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val= 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val= 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val= 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val=0xf 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val= 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val= 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val=decompress 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val= 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val=software 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val=32 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val=32 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val=1 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val=Yes 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val= 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:04.671 20:59:20 -- accel/accel.sh@20 -- # val= 00:06:04.671 20:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # IFS=: 00:06:04.671 20:59:20 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 
20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:05.605 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.605 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.605 20:59:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.605 20:59:21 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.605 20:59:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.605 00:06:05.605 real 0m1.383s 00:06:05.605 user 0m4.593s 00:06:05.605 sys 0m0.134s 00:06:05.605 20:59:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.605 20:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.605 ************************************ 00:06:05.605 END TEST accel_decomp_mcore 00:06:05.605 ************************************ 00:06:05.863 20:59:21 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.863 20:59:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:05.863 20:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.863 20:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.863 ************************************ 00:06:05.863 START TEST accel_decomp_full_mcore 00:06:05.863 ************************************ 00:06:05.863 20:59:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.863 20:59:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.863 20:59:21 -- accel/accel.sh@17 -- # local accel_module 00:06:05.863 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:05.863 20:59:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.863 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:05.863 20:59:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.863 20:59:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.863 20:59:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.863 20:59:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.863 20:59:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.863 20:59:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.863 20:59:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.863 20:59:21 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.863 20:59:21 -- accel/accel.sh@41 -- # jq -r . 00:06:05.863 [2024-04-18 20:59:21.719972] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
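The *_mcore variants above differ only by the core mask: -m 0xf requests four cores, which matches the "Total cores available: 4" notice and the reactors started on cores 0-3 in the trace. A standalone sketch under the same path assumption as before:

# Four-core decompress run; mask 0xf = binary 1111 = cores 0,1,2,3
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf
# The "full" mcore variant also passes -o 0, which the option trace shows switching the
# data size from the 4096-byte default to the whole 111250-byte bib file
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf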
00:06:05.863 [2024-04-18 20:59:21.720030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879290 ] 00:06:05.863 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.863 [2024-04-18 20:59:21.784535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.121 [2024-04-18 20:59:21.863846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.121 [2024-04-18 20:59:21.863943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.121 [2024-04-18 20:59:21.864027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.121 [2024-04-18 20:59:21.864029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val=0xf 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val=decompress 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val=software 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val=32 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val=32 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val=1 00:06:06.121 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.121 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.121 20:59:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.122 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.122 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.122 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.122 20:59:21 -- accel/accel.sh@20 -- # val=Yes 00:06:06.122 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.122 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.122 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.122 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:06.122 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.122 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.122 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:06.122 20:59:21 -- accel/accel.sh@20 -- # val= 00:06:06.122 20:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.122 20:59:21 -- accel/accel.sh@19 -- # IFS=: 00:06:06.122 20:59:21 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 
20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.494 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.494 20:59:23 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.494 20:59:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.494 00:06:07.494 real 0m1.394s 00:06:07.494 user 0m4.637s 00:06:07.494 sys 0m0.131s 00:06:07.494 20:59:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.494 20:59:23 -- common/autotest_common.sh@10 -- # set +x 00:06:07.494 ************************************ 00:06:07.494 END TEST accel_decomp_full_mcore 00:06:07.494 ************************************ 00:06:07.494 20:59:23 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.494 20:59:23 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:07.494 20:59:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.494 20:59:23 -- common/autotest_common.sh@10 -- # set +x 00:06:07.494 ************************************ 00:06:07.494 START TEST accel_decomp_mthread 00:06:07.494 ************************************ 00:06:07.494 20:59:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.494 20:59:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.494 20:59:23 -- accel/accel.sh@17 -- # local accel_module 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.494 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.494 20:59:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.494 20:59:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.494 20:59:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.494 20:59:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.494 20:59:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.494 20:59:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.494 20:59:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.494 20:59:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.494 20:59:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.494 20:59:23 -- accel/accel.sh@41 -- # jq -r . 00:06:07.494 [2024-04-18 20:59:23.287559] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:06:07.495 [2024-04-18 20:59:23.287618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879555 ] 00:06:07.495 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.495 [2024-04-18 20:59:23.348204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.495 [2024-04-18 20:59:23.420063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val=0x1 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val=decompress 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val=software 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val=32 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 
-- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val=32 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val=2 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val=Yes 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:07.753 20:59:23 -- accel/accel.sh@20 -- # val= 00:06:07.753 20:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # IFS=: 00:06:07.753 20:59:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.128 20:59:24 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.128 20:59:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.128 00:06:09.128 real 0m1.368s 00:06:09.128 user 0m1.260s 00:06:09.128 sys 0m0.120s 00:06:09.128 20:59:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.128 20:59:24 -- common/autotest_common.sh@10 -- # set +x 
00:06:09.128 ************************************ 00:06:09.128 END TEST accel_decomp_mthread 00:06:09.128 ************************************ 00:06:09.128 20:59:24 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.128 20:59:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:09.128 20:59:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.128 20:59:24 -- common/autotest_common.sh@10 -- # set +x 00:06:09.128 ************************************ 00:06:09.128 START TEST accel_deomp_full_mthread 00:06:09.128 ************************************ 00:06:09.128 20:59:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.128 20:59:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.128 20:59:24 -- accel/accel.sh@17 -- # local accel_module 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:24 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.128 20:59:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.128 20:59:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.128 20:59:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.128 20:59:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.128 20:59:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.128 20:59:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.128 20:59:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.128 20:59:24 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.128 20:59:24 -- accel/accel.sh@41 -- # jq -r . 00:06:09.128 [2024-04-18 20:59:24.837861] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:06:09.128 [2024-04-18 20:59:24.837934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879829 ] 00:06:09.128 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.128 [2024-04-18 20:59:24.900190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.128 [2024-04-18 20:59:24.976255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val=0x1 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val=decompress 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val=software 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val=32 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 
20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val=32 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val=2 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val=Yes 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:09.128 20:59:25 -- accel/accel.sh@20 -- # val= 00:06:09.128 20:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # IFS=: 00:06:09.128 20:59:25 -- accel/accel.sh@19 -- # read -r var val 00:06:10.501 20:59:26 -- accel/accel.sh@20 -- # val= 00:06:10.501 20:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # IFS=: 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # read -r var val 00:06:10.501 20:59:26 -- accel/accel.sh@20 -- # val= 00:06:10.501 20:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # IFS=: 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # read -r var val 00:06:10.501 20:59:26 -- accel/accel.sh@20 -- # val= 00:06:10.501 20:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # IFS=: 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # read -r var val 00:06:10.501 20:59:26 -- accel/accel.sh@20 -- # val= 00:06:10.501 20:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # IFS=: 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # read -r var val 00:06:10.501 20:59:26 -- accel/accel.sh@20 -- # val= 00:06:10.501 20:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # IFS=: 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # read -r var val 00:06:10.501 20:59:26 -- accel/accel.sh@20 -- # val= 00:06:10.501 20:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # IFS=: 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # read -r var val 00:06:10.501 20:59:26 -- accel/accel.sh@20 -- # val= 00:06:10.501 20:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # IFS=: 00:06:10.501 20:59:26 -- accel/accel.sh@19 -- # read -r var val 00:06:10.501 20:59:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.501 20:59:26 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:10.501 20:59:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.501 00:06:10.501 real 0m1.401s 00:06:10.501 user 0m1.293s 00:06:10.501 sys 0m0.120s 00:06:10.501 20:59:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.501 20:59:26 -- common/autotest_common.sh@10 -- # 
set +x 00:06:10.501 ************************************ 00:06:10.501 END TEST accel_deomp_full_mthread 00:06:10.501 ************************************ 00:06:10.501 20:59:26 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:10.501 20:59:26 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:10.501 20:59:26 -- accel/accel.sh@137 -- # build_accel_config 00:06:10.501 20:59:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:10.501 20:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.501 20:59:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.501 20:59:26 -- common/autotest_common.sh@10 -- # set +x 00:06:10.501 20:59:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.501 20:59:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.501 20:59:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.501 20:59:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.501 20:59:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.501 20:59:26 -- accel/accel.sh@41 -- # jq -r . 00:06:10.501 ************************************ 00:06:10.501 START TEST accel_dif_functional_tests 00:06:10.501 ************************************ 00:06:10.501 20:59:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:10.501 [2024-04-18 20:59:26.421495] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:06:10.501 [2024-04-18 20:59:26.421557] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880191 ] 00:06:10.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.759 [2024-04-18 20:59:26.480379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.759 [2024-04-18 20:59:26.553552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.759 [2024-04-18 20:59:26.553649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.759 [2024-04-18 20:59:26.553651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.759 00:06:10.759 00:06:10.759 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.759 http://cunit.sourceforge.net/ 00:06:10.759 00:06:10.759 00:06:10.759 Suite: accel_dif 00:06:10.759 Test: verify: DIF generated, GUARD check ...passed 00:06:10.759 Test: verify: DIF generated, APPTAG check ...passed 00:06:10.759 Test: verify: DIF generated, REFTAG check ...passed 00:06:10.759 Test: verify: DIF not generated, GUARD check ...[2024-04-18 20:59:26.622558] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.759 [2024-04-18 20:59:26.622603] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.759 passed 00:06:10.759 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 20:59:26.622635] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.759 [2024-04-18 20:59:26.622650] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.759 passed 00:06:10.759 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 20:59:26.622667] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.759 [2024-04-18 
20:59:26.622683] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.759 passed 00:06:10.759 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:10.759 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-18 20:59:26.622725] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:10.759 passed 00:06:10.759 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:10.759 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:10.759 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:10.759 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 20:59:26.622825] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:10.759 passed 00:06:10.759 Test: generate copy: DIF generated, GUARD check ...passed 00:06:10.759 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:10.759 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:10.759 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:10.759 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:10.759 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:10.759 Test: generate copy: iovecs-len validate ...[2024-04-18 20:59:26.622987] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:10.759 passed 00:06:10.759 Test: generate copy: buffer alignment validate ...passed 00:06:10.759 00:06:10.759 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.759 suites 1 1 n/a 0 0 00:06:10.759 tests 20 20 20 0 0 00:06:10.759 asserts 204 204 204 0 n/a 00:06:10.759 00:06:10.759 Elapsed time = 0.002 seconds 00:06:11.017 00:06:11.017 real 0m0.437s 00:06:11.017 user 0m0.606s 00:06:11.017 sys 0m0.149s 00:06:11.017 20:59:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.017 20:59:26 -- common/autotest_common.sh@10 -- # set +x 00:06:11.017 ************************************ 00:06:11.017 END TEST accel_dif_functional_tests 00:06:11.017 ************************************ 00:06:11.017 00:06:11.017 real 0m34.253s 00:06:11.017 user 0m36.315s 00:06:11.017 sys 0m5.583s 00:06:11.017 20:59:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.017 20:59:26 -- common/autotest_common.sh@10 -- # set +x 00:06:11.017 ************************************ 00:06:11.017 END TEST accel 00:06:11.017 ************************************ 00:06:11.017 20:59:26 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:11.017 20:59:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.017 20:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.017 20:59:26 -- common/autotest_common.sh@10 -- # set +x 00:06:11.274 ************************************ 00:06:11.274 START TEST accel_rpc 00:06:11.274 ************************************ 00:06:11.274 20:59:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:11.274 * Looking for test storage... 
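The accel_dif failures above are the intended negative cases: in T10 DIF metadata the Guard field is a per-block CRC of the data, the App Tag is an application-chosen 16-bit value, and the Ref Tag normally tracks the LBA, so each test plants one mismatched field and expects the matching dif.c error seen in the log. A minimal sketch of driving the same functional test binary by hand, substituting a stand-in empty JSON config for the /dev/fd/62 plumbing the harness uses:

  # sketch only: the harness generates its config with build_accel_config and passes it on /dev/fd/62
  DIF_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif
  "$DIF_BIN" -c <(printf '{"subsystems":[]}\n')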
00:06:11.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:11.274 20:59:27 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.274 20:59:27 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2880364 00:06:11.274 20:59:27 -- accel/accel_rpc.sh@15 -- # waitforlisten 2880364 00:06:11.274 20:59:27 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:11.274 20:59:27 -- common/autotest_common.sh@817 -- # '[' -z 2880364 ']' 00:06:11.274 20:59:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.274 20:59:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.274 20:59:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.274 20:59:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.274 20:59:27 -- common/autotest_common.sh@10 -- # set +x 00:06:11.274 [2024-04-18 20:59:27.160657] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:06:11.275 [2024-04-18 20:59:27.160706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880364 ] 00:06:11.275 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.532 [2024-04-18 20:59:27.219895] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.532 [2024-04-18 20:59:27.298333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.098 20:59:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.098 20:59:27 -- common/autotest_common.sh@850 -- # return 0 00:06:12.098 20:59:27 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:12.098 20:59:27 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:12.098 20:59:27 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:12.098 20:59:27 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:12.098 20:59:27 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:12.098 20:59:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.098 20:59:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.098 20:59:27 -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 ************************************ 00:06:12.357 START TEST accel_assign_opcode 00:06:12.357 ************************************ 00:06:12.357 20:59:28 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:12.357 20:59:28 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:12.357 20:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.357 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 [2024-04-18 20:59:28.100680] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:12.357 20:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.357 20:59:28 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:12.357 20:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.357 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 [2024-04-18 20:59:28.108689] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:12.357 20:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.357 20:59:28 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:12.357 20:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.357 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 20:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.357 20:59:28 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:12.357 20:59:28 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:12.357 20:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.616 20:59:28 -- accel/accel_rpc.sh@42 -- # grep software 00:06:12.616 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.616 20:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.616 software 00:06:12.616 00:06:12.616 real 0m0.235s 00:06:12.616 user 0m0.047s 00:06:12.616 sys 0m0.010s 00:06:12.616 20:59:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.616 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.616 ************************************ 00:06:12.616 END TEST accel_assign_opcode 00:06:12.616 ************************************ 00:06:12.616 20:59:28 -- accel/accel_rpc.sh@55 -- # killprocess 2880364 00:06:12.616 20:59:28 -- common/autotest_common.sh@936 -- # '[' -z 2880364 ']' 00:06:12.616 20:59:28 -- common/autotest_common.sh@940 -- # kill -0 2880364 00:06:12.616 20:59:28 -- common/autotest_common.sh@941 -- # uname 00:06:12.616 20:59:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.616 20:59:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2880364 00:06:12.616 20:59:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.616 20:59:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.616 20:59:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2880364' 00:06:12.616 killing process with pid 2880364 00:06:12.616 20:59:28 -- common/autotest_common.sh@955 -- # kill 2880364 00:06:12.616 20:59:28 -- common/autotest_common.sh@960 -- # wait 2880364 00:06:12.874 00:06:12.874 real 0m1.712s 00:06:12.874 user 0m1.827s 00:06:12.874 sys 0m0.473s 00:06:12.874 20:59:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.874 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.874 ************************************ 00:06:12.874 END TEST accel_rpc 00:06:12.874 ************************************ 00:06:12.874 20:59:28 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.874 20:59:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.874 20:59:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.874 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.168 ************************************ 00:06:13.168 START TEST app_cmdline 00:06:13.168 ************************************ 00:06:13.168 20:59:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.168 * Looking for test storage... 
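The accel_assign_opcode test just completed illustrates the required RPC ordering: opcode-to-module assignments are accepted only while the target is still paused in --wait-for-rpc, before framework_start_init. Roughly the same flow by hand with scripts/rpc.py (a sketch assuming a freshly started spdk_tgt; the harness additionally polls with waitforlisten before the first call):

  ./build/bin/spdk_tgt --wait-for-rpc &
  # wait for /var/tmp/spdk.sock to appear, then:
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected output: software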
00:06:13.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:13.168 20:59:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:13.168 20:59:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2880709 00:06:13.168 20:59:28 -- app/cmdline.sh@18 -- # waitforlisten 2880709 00:06:13.168 20:59:28 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:13.168 20:59:28 -- common/autotest_common.sh@817 -- # '[' -z 2880709 ']' 00:06:13.168 20:59:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.168 20:59:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.168 20:59:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.168 20:59:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.168 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.168 [2024-04-18 20:59:29.045750] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:06:13.168 [2024-04-18 20:59:29.045806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880709 ] 00:06:13.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.427 [2024-04-18 20:59:29.108957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.427 [2024-04-18 20:59:29.186336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.995 20:59:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.995 20:59:29 -- common/autotest_common.sh@850 -- # return 0 00:06:13.995 20:59:29 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:14.254 { 00:06:14.254 "version": "SPDK v24.05-pre git sha1 99b3305a5", 00:06:14.254 "fields": { 00:06:14.254 "major": 24, 00:06:14.254 "minor": 5, 00:06:14.254 "patch": 0, 00:06:14.254 "suffix": "-pre", 00:06:14.254 "commit": "99b3305a5" 00:06:14.254 } 00:06:14.254 } 00:06:14.254 20:59:30 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.254 20:59:30 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.254 20:59:30 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:14.254 20:59:30 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.254 20:59:30 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.254 20:59:30 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.254 20:59:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.254 20:59:30 -- app/cmdline.sh@26 -- # sort 00:06:14.254 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:14.254 20:59:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.254 20:59:30 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.254 20:59:30 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.254 20:59:30 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.254 20:59:30 -- common/autotest_common.sh@638 -- # local es=0 00:06:14.254 20:59:30 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.254 20:59:30 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.254 20:59:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.254 20:59:30 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.254 20:59:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.254 20:59:30 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.254 20:59:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.254 20:59:30 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.254 20:59:30 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:14.254 20:59:30 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.514 request: 00:06:14.514 { 00:06:14.514 "method": "env_dpdk_get_mem_stats", 00:06:14.514 "req_id": 1 00:06:14.514 } 00:06:14.514 Got JSON-RPC error response 00:06:14.514 response: 00:06:14.514 { 00:06:14.514 "code": -32601, 00:06:14.514 "message": "Method not found" 00:06:14.514 } 00:06:14.514 20:59:30 -- common/autotest_common.sh@641 -- # es=1 00:06:14.514 20:59:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:14.514 20:59:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:14.514 20:59:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:14.514 20:59:30 -- app/cmdline.sh@1 -- # killprocess 2880709 00:06:14.514 20:59:30 -- common/autotest_common.sh@936 -- # '[' -z 2880709 ']' 00:06:14.514 20:59:30 -- common/autotest_common.sh@940 -- # kill -0 2880709 00:06:14.514 20:59:30 -- common/autotest_common.sh@941 -- # uname 00:06:14.514 20:59:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.514 20:59:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2880709 00:06:14.514 20:59:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.514 20:59:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.514 20:59:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2880709' 00:06:14.514 killing process with pid 2880709 00:06:14.514 20:59:30 -- common/autotest_common.sh@955 -- # kill 2880709 00:06:14.514 20:59:30 -- common/autotest_common.sh@960 -- # wait 2880709 00:06:14.774 00:06:14.774 real 0m1.692s 00:06:14.774 user 0m1.970s 00:06:14.774 sys 0m0.446s 00:06:14.774 20:59:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.774 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:14.774 ************************************ 00:06:14.774 END TEST app_cmdline 00:06:14.774 ************************************ 00:06:14.774 20:59:30 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.774 20:59:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.774 20:59:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.774 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.035 ************************************ 00:06:15.035 START TEST version 00:06:15.035 
************************************ 00:06:15.035 20:59:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.035 * Looking for test storage... 00:06:15.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:15.035 20:59:30 -- app/version.sh@17 -- # get_header_version major 00:06:15.035 20:59:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.035 20:59:30 -- app/version.sh@14 -- # tr -d '"' 00:06:15.035 20:59:30 -- app/version.sh@14 -- # cut -f2 00:06:15.035 20:59:30 -- app/version.sh@17 -- # major=24 00:06:15.035 20:59:30 -- app/version.sh@18 -- # get_header_version minor 00:06:15.035 20:59:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.035 20:59:30 -- app/version.sh@14 -- # tr -d '"' 00:06:15.035 20:59:30 -- app/version.sh@14 -- # cut -f2 00:06:15.035 20:59:30 -- app/version.sh@18 -- # minor=5 00:06:15.035 20:59:30 -- app/version.sh@19 -- # get_header_version patch 00:06:15.035 20:59:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.035 20:59:30 -- app/version.sh@14 -- # tr -d '"' 00:06:15.035 20:59:30 -- app/version.sh@14 -- # cut -f2 00:06:15.035 20:59:30 -- app/version.sh@19 -- # patch=0 00:06:15.035 20:59:30 -- app/version.sh@20 -- # get_header_version suffix 00:06:15.035 20:59:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.035 20:59:30 -- app/version.sh@14 -- # tr -d '"' 00:06:15.035 20:59:30 -- app/version.sh@14 -- # cut -f2 00:06:15.035 20:59:30 -- app/version.sh@20 -- # suffix=-pre 00:06:15.035 20:59:30 -- app/version.sh@22 -- # version=24.5 00:06:15.035 20:59:30 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.035 20:59:30 -- app/version.sh@28 -- # version=24.5rc0 00:06:15.035 20:59:30 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:15.035 20:59:30 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.035 20:59:30 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:15.035 20:59:30 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:15.035 00:06:15.035 real 0m0.158s 00:06:15.035 user 0m0.090s 00:06:15.035 sys 0m0.098s 00:06:15.035 20:59:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.035 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.035 ************************************ 00:06:15.035 END TEST version 00:06:15.035 ************************************ 00:06:15.035 20:59:30 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:15.035 20:59:30 -- spdk/autotest.sh@194 -- # uname -s 00:06:15.035 20:59:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:15.035 20:59:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.035 20:59:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.035 20:59:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:15.035 20:59:30 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:15.035 20:59:30 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:15.035 20:59:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:15.035 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.295 20:59:30 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:15.295 20:59:30 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:15.295 20:59:30 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:15.295 20:59:30 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:15.295 20:59:30 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:15.295 20:59:30 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:15.295 20:59:30 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.295 20:59:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:15.295 20:59:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.295 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.295 ************************************ 00:06:15.295 START TEST nvmf_tcp 00:06:15.295 ************************************ 00:06:15.295 20:59:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.295 * Looking for test storage... 00:06:15.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:15.295 20:59:31 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:15.555 20:59:31 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:15.555 20:59:31 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.555 20:59:31 -- nvmf/common.sh@7 -- # uname -s 00:06:15.555 20:59:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.555 20:59:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.555 20:59:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.555 20:59:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.555 20:59:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.555 20:59:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.555 20:59:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.555 20:59:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.555 20:59:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.555 20:59:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.555 20:59:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.555 20:59:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.555 20:59:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.555 20:59:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.555 20:59:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.555 20:59:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.555 20:59:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.555 20:59:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.555 20:59:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.555 20:59:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.555 20:59:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.555 20:59:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.555 20:59:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.555 20:59:31 -- paths/export.sh@5 -- # export PATH 00:06:15.555 20:59:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.555 20:59:31 -- nvmf/common.sh@47 -- # : 0 00:06:15.555 20:59:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.555 20:59:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.555 20:59:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.555 20:59:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.555 20:59:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.555 20:59:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.555 20:59:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.555 20:59:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.555 20:59:31 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.555 20:59:31 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:15.555 20:59:31 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:15.555 20:59:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:15.556 20:59:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.556 20:59:31 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:15.556 20:59:31 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:15.556 20:59:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:15.556 20:59:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.556 20:59:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.556 ************************************ 00:06:15.556 START TEST nvmf_example 00:06:15.556 ************************************ 00:06:15.556 20:59:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:15.556 * Looking for test storage... 
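nvmf.sh above only checks the transport argument and dispatches each target-side script through run_test, so the example test that follows can also be launched on its own; a sketch, assuming the same workspace and the environment this job exports (TCP transport on the phy/e810 setup):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/target/nvmf_example.sh --transport=tcp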
00:06:15.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.556 20:59:31 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.556 20:59:31 -- nvmf/common.sh@7 -- # uname -s 00:06:15.556 20:59:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.556 20:59:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.556 20:59:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.556 20:59:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.556 20:59:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.556 20:59:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.556 20:59:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.556 20:59:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.556 20:59:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.815 20:59:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.815 20:59:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.815 20:59:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.815 20:59:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.815 20:59:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.815 20:59:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.815 20:59:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.815 20:59:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.815 20:59:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.815 20:59:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.815 20:59:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.815 20:59:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.815 20:59:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.815 20:59:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.815 20:59:31 -- paths/export.sh@5 -- # export PATH 00:06:15.815 20:59:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.815 20:59:31 -- nvmf/common.sh@47 -- # : 0 00:06:15.815 20:59:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.815 20:59:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.815 20:59:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.815 20:59:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.815 20:59:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.815 20:59:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.815 20:59:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.815 20:59:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.815 20:59:31 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:15.815 20:59:31 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:15.815 20:59:31 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:15.815 20:59:31 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:15.815 20:59:31 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:15.815 20:59:31 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:15.815 20:59:31 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:15.815 20:59:31 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:15.815 20:59:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:15.815 20:59:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 20:59:31 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:15.815 20:59:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:15.815 20:59:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.815 20:59:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:15.815 20:59:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:15.815 20:59:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:15.815 20:59:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.815 20:59:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:15.815 20:59:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.815 20:59:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:15.815 20:59:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:15.815 20:59:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.815 20:59:31 -- 
common/autotest_common.sh@10 -- # set +x 00:06:22.385 20:59:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:22.385 20:59:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:22.385 20:59:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:22.385 20:59:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:22.385 20:59:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:22.385 20:59:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:22.385 20:59:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:22.385 20:59:37 -- nvmf/common.sh@295 -- # net_devs=() 00:06:22.385 20:59:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:22.385 20:59:37 -- nvmf/common.sh@296 -- # e810=() 00:06:22.385 20:59:37 -- nvmf/common.sh@296 -- # local -ga e810 00:06:22.385 20:59:37 -- nvmf/common.sh@297 -- # x722=() 00:06:22.385 20:59:37 -- nvmf/common.sh@297 -- # local -ga x722 00:06:22.385 20:59:37 -- nvmf/common.sh@298 -- # mlx=() 00:06:22.385 20:59:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:22.385 20:59:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.385 20:59:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:22.385 20:59:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:22.385 20:59:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:22.385 20:59:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.385 20:59:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:22.385 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:22.385 20:59:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.385 20:59:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:22.385 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:22.385 20:59:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:06:22.385 20:59:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:22.385 20:59:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.385 20:59:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.385 20:59:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:22.385 20:59:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.385 20:59:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:22.385 Found net devices under 0000:86:00.0: cvl_0_0 00:06:22.385 20:59:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.385 20:59:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.385 20:59:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.385 20:59:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:22.385 20:59:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.385 20:59:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:22.385 Found net devices under 0000:86:00.1: cvl_0_1 00:06:22.385 20:59:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.385 20:59:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:22.385 20:59:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:22.385 20:59:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:22.385 20:59:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:22.385 20:59:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.385 20:59:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.385 20:59:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.385 20:59:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:22.385 20:59:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.385 20:59:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.385 20:59:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:22.385 20:59:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.385 20:59:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.385 20:59:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:22.385 20:59:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:22.385 20:59:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.385 20:59:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.385 20:59:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.385 20:59:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.385 20:59:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:22.385 20:59:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.385 20:59:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.386 20:59:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.386 20:59:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:22.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:22.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:06:22.386 00:06:22.386 --- 10.0.0.2 ping statistics --- 00:06:22.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.386 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:06:22.386 20:59:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:06:22.386 00:06:22.386 --- 10.0.0.1 ping statistics --- 00:06:22.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.386 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:22.386 20:59:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.386 20:59:37 -- nvmf/common.sh@411 -- # return 0 00:06:22.386 20:59:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:22.386 20:59:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.386 20:59:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:22.386 20:59:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:22.386 20:59:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.386 20:59:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:22.386 20:59:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:22.386 20:59:38 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:22.386 20:59:38 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:22.386 20:59:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:22.386 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.386 20:59:38 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:22.386 20:59:38 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:22.386 20:59:38 -- target/nvmf_example.sh@34 -- # nvmfpid=2884832 00:06:22.386 20:59:38 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:22.386 20:59:38 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:22.386 20:59:38 -- target/nvmf_example.sh@36 -- # waitforlisten 2884832 00:06:22.386 20:59:38 -- common/autotest_common.sh@817 -- # '[' -z 2884832 ']' 00:06:22.386 20:59:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.386 20:59:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:22.386 20:59:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
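The nvmftestinit sequence above builds a small two-namespace test topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, its peer (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and reachability is verified in both directions before the example target is started inside the namespace with -m 0xF. Condensed from the trace, the same setup by hand (run as root, interface names as detected in this job):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator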
00:06:22.386 20:59:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:22.386 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.386 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.321 20:59:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:23.321 20:59:38 -- common/autotest_common.sh@850 -- # return 0 00:06:23.321 20:59:38 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:23.321 20:59:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:23.321 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 20:59:38 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:23.322 20:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.322 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 20:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.322 20:59:38 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:23.322 20:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.322 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 20:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.322 20:59:38 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:23.322 20:59:38 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:23.322 20:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.322 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 20:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.322 20:59:38 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:23.322 20:59:38 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:23.322 20:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.322 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 20:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.322 20:59:38 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.322 20:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.322 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 20:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.322 20:59:38 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:23.322 20:59:38 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:23.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.296 Initializing NVMe Controllers 00:06:33.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:33.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:33.296 Initialization complete. Launching workers. 
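With the example target running inside the namespace, the test provisions it entirely over JSON-RPC and then drives it from the root namespace with spdk_nvme_perf. Replayed as plain scripts/rpc.py calls (rpc_cmd in the trace is the harness wrapper around rpc.py talking to /var/tmp/spdk.sock; paths shown relative to the SPDK checkout), the sequence is roughly:

# launch the example TCP target in the target namespace, core mask 0xF, as in the run above
ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
# provision over RPC: transport, a 64 MiB malloc bdev (512 B blocks), a subsystem
# with that bdev as namespace 1, and a TCP listener on the target IP
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512            # returns Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 10 s of 4 KiB random I/O, 30 % reads, queue depth 64, from the initiator side
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'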
00:06:33.296 ======================================================== 00:06:33.296 Latency(us) 00:06:33.296 Device Information : IOPS MiB/s Average min max 00:06:33.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14767.37 57.69 4333.92 711.08 15567.78 00:06:33.296 ======================================================== 00:06:33.296 Total : 14767.37 57.69 4333.92 711.08 15567.78 00:06:33.296 00:06:33.296 20:59:49 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:33.296 20:59:49 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:33.296 20:59:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:33.296 20:59:49 -- nvmf/common.sh@117 -- # sync 00:06:33.296 20:59:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:33.296 20:59:49 -- nvmf/common.sh@120 -- # set +e 00:06:33.296 20:59:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:33.296 20:59:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:33.296 rmmod nvme_tcp 00:06:33.296 rmmod nvme_fabrics 00:06:33.296 rmmod nvme_keyring 00:06:33.296 20:59:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:33.296 20:59:49 -- nvmf/common.sh@124 -- # set -e 00:06:33.296 20:59:49 -- nvmf/common.sh@125 -- # return 0 00:06:33.296 20:59:49 -- nvmf/common.sh@478 -- # '[' -n 2884832 ']' 00:06:33.296 20:59:49 -- nvmf/common.sh@479 -- # killprocess 2884832 00:06:33.296 20:59:49 -- common/autotest_common.sh@936 -- # '[' -z 2884832 ']' 00:06:33.296 20:59:49 -- common/autotest_common.sh@940 -- # kill -0 2884832 00:06:33.296 20:59:49 -- common/autotest_common.sh@941 -- # uname 00:06:33.296 20:59:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.296 20:59:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2884832 00:06:33.556 20:59:49 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:33.556 20:59:49 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:33.556 20:59:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2884832' 00:06:33.556 killing process with pid 2884832 00:06:33.556 20:59:49 -- common/autotest_common.sh@955 -- # kill 2884832 00:06:33.556 20:59:49 -- common/autotest_common.sh@960 -- # wait 2884832 00:06:33.556 nvmf threads initialize successfully 00:06:33.556 bdev subsystem init successfully 00:06:33.556 created a nvmf target service 00:06:33.556 create targets's poll groups done 00:06:33.556 all subsystems of target started 00:06:33.556 nvmf target is running 00:06:33.556 all subsystems of target stopped 00:06:33.556 destroy targets's poll groups done 00:06:33.556 destroyed the nvmf target service 00:06:33.556 bdev subsystem finish successfully 00:06:33.556 nvmf threads destroy successfully 00:06:33.556 20:59:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:33.556 20:59:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:33.556 20:59:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:33.556 20:59:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:33.556 20:59:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:33.556 20:59:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.556 20:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:33.556 20:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.092 20:59:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:36.092 20:59:51 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:36.092 20:59:51 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:06:36.092 20:59:51 -- common/autotest_common.sh@10 -- # set +x 00:06:36.092 00:06:36.092 real 0m20.163s 00:06:36.092 user 0m45.916s 00:06:36.092 sys 0m6.226s 00:06:36.092 20:59:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.092 20:59:51 -- common/autotest_common.sh@10 -- # set +x 00:06:36.092 ************************************ 00:06:36.092 END TEST nvmf_example 00:06:36.092 ************************************ 00:06:36.092 20:59:51 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:36.092 20:59:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:36.092 20:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.092 20:59:51 -- common/autotest_common.sh@10 -- # set +x 00:06:36.092 ************************************ 00:06:36.092 START TEST nvmf_filesystem 00:06:36.092 ************************************ 00:06:36.092 20:59:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:36.092 * Looking for test storage... 00:06:36.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.092 20:59:51 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:36.092 20:59:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:36.092 20:59:51 -- common/autotest_common.sh@34 -- # set -e 00:06:36.092 20:59:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:36.092 20:59:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:36.093 20:59:51 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:36.093 20:59:51 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:36.093 20:59:51 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:36.093 20:59:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:36.093 20:59:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:36.093 20:59:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:36.093 20:59:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:36.093 20:59:51 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:36.093 20:59:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:36.093 20:59:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:36.093 20:59:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:36.093 20:59:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:36.093 20:59:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:36.093 20:59:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:36.093 20:59:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:36.093 20:59:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:36.093 20:59:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:36.093 20:59:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:36.093 20:59:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:36.093 20:59:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:36.093 20:59:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:36.093 20:59:51 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:36.093 20:59:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:36.093 20:59:51 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:36.093 20:59:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:36.093 20:59:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:36.093 20:59:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:36.093 20:59:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:36.093 20:59:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:36.093 20:59:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:36.093 20:59:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:36.093 20:59:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:36.093 20:59:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:36.093 20:59:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:36.093 20:59:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:36.093 20:59:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:36.093 20:59:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:36.093 20:59:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:36.093 20:59:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:36.093 20:59:51 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:36.093 20:59:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:36.093 20:59:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:36.093 20:59:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:36.093 20:59:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:36.093 20:59:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:36.093 20:59:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:36.093 20:59:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:36.093 20:59:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:36.093 20:59:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:36.093 20:59:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:36.093 20:59:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:36.093 20:59:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:36.093 20:59:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:36.093 20:59:51 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:36.093 20:59:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:36.093 20:59:51 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:36.093 20:59:51 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:36.093 20:59:51 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:36.093 20:59:51 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:36.093 20:59:51 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:36.093 20:59:51 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:36.093 20:59:51 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:36.093 20:59:51 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:36.093 20:59:51 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:36.093 20:59:51 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:36.093 20:59:51 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:36.093 20:59:51 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:36.093 
20:59:51 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:36.093 20:59:51 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:36.093 20:59:51 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:36.093 20:59:51 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:36.093 20:59:51 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:36.093 20:59:51 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:36.093 20:59:51 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:36.093 20:59:51 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:36.093 20:59:51 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:36.093 20:59:51 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:36.093 20:59:51 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:36.093 20:59:51 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:36.093 20:59:51 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:36.093 20:59:51 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:36.093 20:59:51 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:36.093 20:59:51 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:36.093 20:59:51 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:36.093 20:59:51 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:36.093 20:59:51 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:36.093 20:59:51 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:36.093 20:59:51 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:36.093 20:59:51 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:36.093 20:59:51 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.093 20:59:51 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:36.093 20:59:51 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.093 20:59:51 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:36.093 20:59:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:36.093 20:59:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:36.093 20:59:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:36.093 20:59:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:36.093 20:59:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:36.093 20:59:51 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:36.093 20:59:51 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:36.093 20:59:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:36.093 #define SPDK_CONFIG_H 00:06:36.093 #define SPDK_CONFIG_APPS 1 00:06:36.093 #define SPDK_CONFIG_ARCH native 00:06:36.093 #undef SPDK_CONFIG_ASAN 00:06:36.093 #undef SPDK_CONFIG_AVAHI 00:06:36.093 #undef SPDK_CONFIG_CET 00:06:36.093 #define SPDK_CONFIG_COVERAGE 1 00:06:36.093 #define SPDK_CONFIG_CROSS_PREFIX 00:06:36.093 #undef SPDK_CONFIG_CRYPTO 00:06:36.093 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:36.093 #undef 
SPDK_CONFIG_CUSTOMOCF 00:06:36.093 #undef SPDK_CONFIG_DAOS 00:06:36.093 #define SPDK_CONFIG_DAOS_DIR 00:06:36.093 #define SPDK_CONFIG_DEBUG 1 00:06:36.093 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:36.093 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:36.093 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:36.093 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:36.093 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:36.093 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:36.093 #define SPDK_CONFIG_EXAMPLES 1 00:06:36.093 #undef SPDK_CONFIG_FC 00:06:36.093 #define SPDK_CONFIG_FC_PATH 00:06:36.093 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:36.093 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:36.093 #undef SPDK_CONFIG_FUSE 00:06:36.093 #undef SPDK_CONFIG_FUZZER 00:06:36.093 #define SPDK_CONFIG_FUZZER_LIB 00:06:36.093 #undef SPDK_CONFIG_GOLANG 00:06:36.093 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:36.093 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:36.093 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:36.093 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:36.093 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:36.093 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:36.093 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:36.093 #define SPDK_CONFIG_IDXD 1 00:06:36.093 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:36.093 #undef SPDK_CONFIG_IPSEC_MB 00:06:36.093 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:36.093 #define SPDK_CONFIG_ISAL 1 00:06:36.093 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:36.093 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:36.093 #define SPDK_CONFIG_LIBDIR 00:06:36.093 #undef SPDK_CONFIG_LTO 00:06:36.093 #define SPDK_CONFIG_MAX_LCORES 00:06:36.093 #define SPDK_CONFIG_NVME_CUSE 1 00:06:36.093 #undef SPDK_CONFIG_OCF 00:06:36.093 #define SPDK_CONFIG_OCF_PATH 00:06:36.093 #define SPDK_CONFIG_OPENSSL_PATH 00:06:36.093 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:36.093 #define SPDK_CONFIG_PGO_DIR 00:06:36.093 #undef SPDK_CONFIG_PGO_USE 00:06:36.093 #define SPDK_CONFIG_PREFIX /usr/local 00:06:36.093 #undef SPDK_CONFIG_RAID5F 00:06:36.093 #undef SPDK_CONFIG_RBD 00:06:36.093 #define SPDK_CONFIG_RDMA 1 00:06:36.093 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:36.093 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:36.093 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:36.093 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:36.094 #define SPDK_CONFIG_SHARED 1 00:06:36.094 #undef SPDK_CONFIG_SMA 00:06:36.094 #define SPDK_CONFIG_TESTS 1 00:06:36.094 #undef SPDK_CONFIG_TSAN 00:06:36.094 #define SPDK_CONFIG_UBLK 1 00:06:36.094 #define SPDK_CONFIG_UBSAN 1 00:06:36.094 #undef SPDK_CONFIG_UNIT_TESTS 00:06:36.094 #undef SPDK_CONFIG_URING 00:06:36.094 #define SPDK_CONFIG_URING_PATH 00:06:36.094 #undef SPDK_CONFIG_URING_ZNS 00:06:36.094 #undef SPDK_CONFIG_USDT 00:06:36.094 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:36.094 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:36.094 #define SPDK_CONFIG_VFIO_USER 1 00:06:36.094 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:36.094 #define SPDK_CONFIG_VHOST 1 00:06:36.094 #define SPDK_CONFIG_VIRTIO 1 00:06:36.094 #undef SPDK_CONFIG_VTUNE 00:06:36.094 #define SPDK_CONFIG_VTUNE_DIR 00:06:36.094 #define SPDK_CONFIG_WERROR 1 00:06:36.094 #define SPDK_CONFIG_WPDK_DIR 00:06:36.094 #undef SPDK_CONFIG_XNVME 00:06:36.094 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:36.094 20:59:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:36.094 20:59:51 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.094 20:59:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.094 20:59:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.094 20:59:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.094 20:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.094 20:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.094 20:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.094 20:59:51 -- paths/export.sh@5 -- # export PATH 00:06:36.094 20:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.094 20:59:51 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:36.094 20:59:51 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:36.094 20:59:51 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:36.094 20:59:51 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:36.094 20:59:51 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:36.094 20:59:51 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.094 20:59:51 -- pm/common@67 -- # TEST_TAG=N/A 00:06:36.094 20:59:51 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:36.094 20:59:51 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:36.094 20:59:51 -- pm/common@71 -- # uname -s 00:06:36.094 20:59:51 -- pm/common@71 -- # PM_OS=Linux 00:06:36.094 20:59:51 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:36.094 20:59:51 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:36.094 20:59:51 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:36.094 20:59:51 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:36.094 20:59:51 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:36.094 20:59:51 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:36.094 20:59:51 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:36.094 20:59:51 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:36.094 20:59:51 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:36.094 20:59:51 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:36.094 20:59:51 -- common/autotest_common.sh@57 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:36.094 20:59:51 -- common/autotest_common.sh@61 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:36.094 20:59:51 -- common/autotest_common.sh@63 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:36.094 20:59:51 -- common/autotest_common.sh@65 -- # : 1 00:06:36.094 20:59:51 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:36.094 20:59:51 -- common/autotest_common.sh@67 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:36.094 20:59:51 -- common/autotest_common.sh@69 -- # : 00:06:36.094 20:59:51 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:36.094 20:59:51 -- common/autotest_common.sh@71 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:36.094 20:59:51 -- common/autotest_common.sh@73 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:36.094 20:59:51 -- common/autotest_common.sh@75 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:36.094 20:59:51 -- common/autotest_common.sh@77 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:36.094 20:59:51 -- common/autotest_common.sh@79 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:36.094 20:59:51 -- common/autotest_common.sh@81 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:36.094 20:59:51 -- common/autotest_common.sh@83 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:36.094 20:59:51 -- common/autotest_common.sh@85 -- # : 1 00:06:36.094 20:59:51 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:36.094 20:59:51 -- common/autotest_common.sh@87 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:36.094 20:59:51 -- common/autotest_common.sh@89 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:36.094 20:59:51 -- common/autotest_common.sh@91 -- # : 1 
00:06:36.094 20:59:51 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:36.094 20:59:51 -- common/autotest_common.sh@93 -- # : 1 00:06:36.094 20:59:51 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:36.094 20:59:51 -- common/autotest_common.sh@95 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:36.094 20:59:51 -- common/autotest_common.sh@97 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:36.094 20:59:51 -- common/autotest_common.sh@99 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:36.094 20:59:51 -- common/autotest_common.sh@101 -- # : tcp 00:06:36.094 20:59:51 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:36.094 20:59:51 -- common/autotest_common.sh@103 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:36.094 20:59:51 -- common/autotest_common.sh@105 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:36.094 20:59:51 -- common/autotest_common.sh@107 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:36.094 20:59:51 -- common/autotest_common.sh@109 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:36.094 20:59:51 -- common/autotest_common.sh@111 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:36.094 20:59:51 -- common/autotest_common.sh@113 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:36.094 20:59:51 -- common/autotest_common.sh@115 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:36.094 20:59:51 -- common/autotest_common.sh@117 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:36.094 20:59:51 -- common/autotest_common.sh@119 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:36.094 20:59:51 -- common/autotest_common.sh@121 -- # : 1 00:06:36.094 20:59:51 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:36.094 20:59:51 -- common/autotest_common.sh@123 -- # : 00:06:36.094 20:59:51 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:36.094 20:59:51 -- common/autotest_common.sh@125 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:36.094 20:59:51 -- common/autotest_common.sh@127 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:36.094 20:59:51 -- common/autotest_common.sh@129 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:36.094 20:59:51 -- common/autotest_common.sh@131 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:36.094 20:59:51 -- common/autotest_common.sh@133 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:36.094 20:59:51 -- common/autotest_common.sh@135 -- # : 0 00:06:36.094 20:59:51 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:36.094 20:59:51 -- common/autotest_common.sh@137 -- # : 00:06:36.095 20:59:51 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:36.095 20:59:51 -- 
common/autotest_common.sh@139 -- # : true 00:06:36.095 20:59:51 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:36.095 20:59:51 -- common/autotest_common.sh@141 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:36.095 20:59:51 -- common/autotest_common.sh@143 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:36.095 20:59:51 -- common/autotest_common.sh@145 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:36.095 20:59:51 -- common/autotest_common.sh@147 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:36.095 20:59:51 -- common/autotest_common.sh@149 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:36.095 20:59:51 -- common/autotest_common.sh@151 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:36.095 20:59:51 -- common/autotest_common.sh@153 -- # : e810 00:06:36.095 20:59:51 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:36.095 20:59:51 -- common/autotest_common.sh@155 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:36.095 20:59:51 -- common/autotest_common.sh@157 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:36.095 20:59:51 -- common/autotest_common.sh@159 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:36.095 20:59:51 -- common/autotest_common.sh@161 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:36.095 20:59:51 -- common/autotest_common.sh@163 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:36.095 20:59:51 -- common/autotest_common.sh@166 -- # : 00:06:36.095 20:59:51 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:36.095 20:59:51 -- common/autotest_common.sh@168 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:36.095 20:59:51 -- common/autotest_common.sh@170 -- # : 0 00:06:36.095 20:59:51 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:36.095 20:59:51 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:36.095 20:59:51 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:36.095 20:59:51 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:36.095 20:59:51 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:36.095 20:59:51 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:36.095 20:59:51 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:36.095 20:59:51 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:36.095 20:59:51 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:36.095 20:59:51 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:36.095 20:59:51 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:36.095 20:59:51 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.095 20:59:51 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.095 20:59:51 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:36.095 20:59:51 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:36.095 20:59:51 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:36.095 20:59:51 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:36.095 20:59:51 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:36.095 20:59:51 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:36.095 20:59:51 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:36.095 20:59:51 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:36.095 20:59:51 -- common/autotest_common.sh@199 -- # cat 00:06:36.095 20:59:51 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:36.095 20:59:51 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:36.095 20:59:51 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:36.095 20:59:51 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:36.095 20:59:51 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:36.095 20:59:51 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:36.095 20:59:51 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:36.095 20:59:51 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:36.095 20:59:51 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:36.095 20:59:51 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:36.095 20:59:51 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:36.095 20:59:51 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:36.095 20:59:51 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:36.095 20:59:51 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:36.095 20:59:51 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:36.095 20:59:51 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:36.095 20:59:51 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:36.095 20:59:51 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:36.095 20:59:51 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:36.095 20:59:51 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:36.095 20:59:51 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:36.095 20:59:51 -- common/autotest_common.sh@252 -- # valgrind= 00:06:36.095 20:59:51 -- common/autotest_common.sh@258 -- # uname -s 00:06:36.095 20:59:51 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:36.095 20:59:51 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:36.095 20:59:51 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:36.095 20:59:51 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:36.095 20:59:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:36.095 20:59:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:36.095 
20:59:51 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:36.095 20:59:51 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j96 00:06:36.095 20:59:51 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:36.095 20:59:51 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:36.095 20:59:51 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:36.095 20:59:51 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:36.095 20:59:51 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:36.095 20:59:51 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:36.095 20:59:51 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:36.095 20:59:51 -- common/autotest_common.sh@307 -- # [[ -z 2887261 ]] 00:06:36.095 20:59:51 -- common/autotest_common.sh@307 -- # kill -0 2887261 00:06:36.095 20:59:51 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:36.095 20:59:51 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:36.095 20:59:51 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:36.095 20:59:51 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:36.095 20:59:51 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:36.095 20:59:51 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:36.095 20:59:51 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:36.095 20:59:51 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:36.095 20:59:51 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.yyPznA 00:06:36.095 20:59:51 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:36.095 20:59:51 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:36.095 20:59:51 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:36.095 20:59:51 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yyPznA/tests/target /tmp/spdk.yyPznA 00:06:36.095 20:59:51 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:36.095 20:59:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:36.096 20:59:51 -- common/autotest_common.sh@316 -- # df -T 00:06:36.096 20:59:51 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:36.096 20:59:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:36.096 20:59:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=996753408 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:36.096 20:59:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287676416 00:06:36.096 20:59:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=186554138624 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=195974311936 00:06:36.096 20:59:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=9420173312 00:06:36.096 20:59:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=97933615104 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987153920 00:06:36.096 20:59:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=53538816 00:06:36.096 20:59:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=39185268736 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=39194865664 00:06:36.096 20:59:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=9596928 00:06:36.096 20:59:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=97986387968 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987158016 00:06:36.096 20:59:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=770048 00:06:36.096 20:59:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=19597426688 00:06:36.096 20:59:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19597430784 00:06:36.096 20:59:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:36.096 20:59:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:36.096 20:59:51 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:36.096 * Looking for test storage... 
00:06:36.096 20:59:51 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:36.096 20:59:51 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:36.096 20:59:51 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.096 20:59:51 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:36.096 20:59:51 -- common/autotest_common.sh@361 -- # mount=/ 00:06:36.096 20:59:51 -- common/autotest_common.sh@363 -- # target_space=186554138624 00:06:36.096 20:59:51 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:36.096 20:59:51 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:36.096 20:59:51 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:36.096 20:59:51 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:36.096 20:59:51 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:36.096 20:59:51 -- common/autotest_common.sh@370 -- # new_size=11634765824 00:06:36.096 20:59:51 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:36.096 20:59:51 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.096 20:59:51 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.096 20:59:51 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.096 20:59:51 -- common/autotest_common.sh@378 -- # return 0 00:06:36.096 20:59:51 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:36.096 20:59:51 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:36.096 20:59:51 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:36.096 20:59:51 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:36.096 20:59:51 -- common/autotest_common.sh@1673 -- # true 00:06:36.096 20:59:51 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:36.096 20:59:51 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:36.096 20:59:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:36.096 20:59:51 -- common/autotest_common.sh@27 -- # exec 00:06:36.096 20:59:51 -- common/autotest_common.sh@29 -- # exec 00:06:36.096 20:59:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:36.096 20:59:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:36.096 20:59:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:36.096 20:59:51 -- common/autotest_common.sh@18 -- # set -x 00:06:36.096 20:59:51 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.096 20:59:51 -- nvmf/common.sh@7 -- # uname -s 00:06:36.096 20:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.096 20:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.096 20:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.096 20:59:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.096 20:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.096 20:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.096 20:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.096 20:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.096 20:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.096 20:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.096 20:59:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.096 20:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.096 20:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.096 20:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.096 20:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.096 20:59:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.096 20:59:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.096 20:59:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.096 20:59:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.096 20:59:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.096 20:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.096 20:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.096 20:59:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.096 20:59:51 -- paths/export.sh@5 -- # export PATH 00:06:36.096 20:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.096 20:59:51 -- nvmf/common.sh@47 -- # : 0 00:06:36.096 20:59:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.096 20:59:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.096 20:59:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.096 20:59:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.096 20:59:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.096 20:59:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.096 20:59:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.096 20:59:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.096 20:59:51 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:36.096 20:59:51 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:36.097 20:59:51 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:36.097 20:59:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:36.097 20:59:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.097 20:59:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:36.097 20:59:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:36.097 20:59:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:36.097 20:59:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.097 20:59:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.097 20:59:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.097 20:59:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:36.097 20:59:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:36.097 20:59:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.097 20:59:51 -- common/autotest_common.sh@10 -- # set +x 00:06:42.660 20:59:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:42.660 20:59:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.660 20:59:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.660 20:59:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.660 20:59:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.660 20:59:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.660 20:59:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.660 20:59:57 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:42.660 20:59:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.660 20:59:57 -- nvmf/common.sh@296 -- # e810=() 00:06:42.660 20:59:57 -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.660 20:59:57 -- nvmf/common.sh@297 -- # x722=() 00:06:42.660 20:59:57 -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.660 20:59:57 -- nvmf/common.sh@298 -- # mlx=() 00:06:42.660 20:59:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.660 20:59:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.660 20:59:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.660 20:59:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.660 20:59:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.660 20:59:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.660 20:59:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:42.660 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:42.660 20:59:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.660 20:59:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:42.660 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:42.660 20:59:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.660 20:59:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.660 20:59:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.660 20:59:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.660 20:59:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:42.660 20:59:57 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.660 20:59:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:42.661 Found net devices under 0000:86:00.0: cvl_0_0 00:06:42.661 20:59:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.661 20:59:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.661 20:59:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.661 20:59:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:42.661 20:59:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.661 20:59:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:42.661 Found net devices under 0000:86:00.1: cvl_0_1 00:06:42.661 20:59:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.661 20:59:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:42.661 20:59:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:42.661 20:59:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:42.661 20:59:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:42.661 20:59:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:42.661 20:59:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.661 20:59:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.661 20:59:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.661 20:59:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.661 20:59:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.661 20:59:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.661 20:59:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.661 20:59:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.661 20:59:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.661 20:59:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.661 20:59:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.661 20:59:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.661 20:59:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.661 20:59:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.661 20:59:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.661 20:59:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.661 20:59:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.661 20:59:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.661 20:59:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.661 20:59:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:06:42.661 00:06:42.661 --- 10.0.0.2 ping statistics --- 00:06:42.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.661 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:06:42.661 20:59:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:42.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:06:42.661 00:06:42.661 --- 10.0.0.1 ping statistics --- 00:06:42.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.661 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:06:42.661 20:59:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.661 20:59:57 -- nvmf/common.sh@411 -- # return 0 00:06:42.661 20:59:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:42.661 20:59:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.661 20:59:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:42.661 20:59:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:42.661 20:59:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.661 20:59:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:42.661 20:59:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:42.661 20:59:57 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:42.661 20:59:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:42.661 20:59:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.661 20:59:57 -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 ************************************ 00:06:42.661 START TEST nvmf_filesystem_no_in_capsule 00:06:42.661 ************************************ 00:06:42.661 20:59:58 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:42.661 20:59:58 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:42.661 20:59:58 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:42.661 20:59:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:42.661 20:59:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:42.661 20:59:58 -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 20:59:58 -- nvmf/common.sh@470 -- # nvmfpid=2890647 00:06:42.661 20:59:58 -- nvmf/common.sh@471 -- # waitforlisten 2890647 00:06:42.661 20:59:58 -- common/autotest_common.sh@817 -- # '[' -z 2890647 ']' 00:06:42.661 20:59:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.661 20:59:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.661 20:59:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.661 20:59:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.661 20:59:58 -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 20:59:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.661 [2024-04-18 20:59:58.166173] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:06:42.661 [2024-04-18 20:59:58.166213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.661 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.661 [2024-04-18 20:59:58.228336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.661 [2024-04-18 20:59:58.308093] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
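The trace above builds the single-host TCP test topology: one E810 port (cvl_0_0) is moved into a dedicated network namespace to act as the NVMe-oF target, its peer port (cvl_0_1) keeps 10.0.0.1 in the root namespace as the initiator, connectivity is checked with one ping in each direction, and the SPDK target application is then launched inside that namespace. A minimal sketch of the same steps, using the interface names, addresses and flags visible in the log (the nvmf_tgt path is shortened to the SPDK build tree, and the socket poll stands in for the harness's waitforlisten helper):

TGT_NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # gets 10.0.0.2 inside the namespace (target side)
INI_IF=cvl_0_1          # keeps 10.0.0.1 in the root namespace (initiator side)

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$TGT_NS"
ip link set "$TGT_IF" netns "$TGT_NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                              # root namespace -> target namespace
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1      # target namespace -> root namespace

modprobe nvme-tcp                               # kernel initiator, needed for nvme connect later
ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # wait for the target's RPC socket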
00:06:42.661 [2024-04-18 20:59:58.308129] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.661 [2024-04-18 20:59:58.308136] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.661 [2024-04-18 20:59:58.308142] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.661 [2024-04-18 20:59:58.308147] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.661 [2024-04-18 20:59:58.308192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.661 [2024-04-18 20:59:58.308208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.661 [2024-04-18 20:59:58.308296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.661 [2024-04-18 20:59:58.308297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.286 20:59:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:43.286 20:59:58 -- common/autotest_common.sh@850 -- # return 0 00:06:43.286 20:59:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:43.286 20:59:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:43.286 20:59:58 -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 20:59:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.286 20:59:59 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:43.286 20:59:59 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:43.286 20:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.286 20:59:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 [2024-04-18 20:59:59.012395] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.286 20:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.286 20:59:59 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:43.286 20:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.286 20:59:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 Malloc1 00:06:43.286 20:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.286 20:59:59 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:43.286 20:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.286 20:59:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 20:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.286 20:59:59 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:43.286 20:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.286 20:59:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 20:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.286 20:59:59 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:43.286 20:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.286 20:59:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 [2024-04-18 20:59:59.172688] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.286 20:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.286 20:59:59 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:06:43.286 20:59:59 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:43.286 20:59:59 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:43.286 20:59:59 -- common/autotest_common.sh@1366 -- # local bs 00:06:43.286 20:59:59 -- common/autotest_common.sh@1367 -- # local nb 00:06:43.286 20:59:59 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:43.286 20:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.286 20:59:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 20:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.286 20:59:59 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:43.286 { 00:06:43.286 "name": "Malloc1", 00:06:43.286 "aliases": [ 00:06:43.286 "306dcb96-7003-49e7-9abe-03aaf34f74f8" 00:06:43.286 ], 00:06:43.286 "product_name": "Malloc disk", 00:06:43.286 "block_size": 512, 00:06:43.286 "num_blocks": 1048576, 00:06:43.286 "uuid": "306dcb96-7003-49e7-9abe-03aaf34f74f8", 00:06:43.286 "assigned_rate_limits": { 00:06:43.286 "rw_ios_per_sec": 0, 00:06:43.286 "rw_mbytes_per_sec": 0, 00:06:43.286 "r_mbytes_per_sec": 0, 00:06:43.286 "w_mbytes_per_sec": 0 00:06:43.286 }, 00:06:43.286 "claimed": true, 00:06:43.286 "claim_type": "exclusive_write", 00:06:43.286 "zoned": false, 00:06:43.286 "supported_io_types": { 00:06:43.286 "read": true, 00:06:43.286 "write": true, 00:06:43.286 "unmap": true, 00:06:43.286 "write_zeroes": true, 00:06:43.286 "flush": true, 00:06:43.286 "reset": true, 00:06:43.286 "compare": false, 00:06:43.286 "compare_and_write": false, 00:06:43.286 "abort": true, 00:06:43.286 "nvme_admin": false, 00:06:43.286 "nvme_io": false 00:06:43.286 }, 00:06:43.286 "memory_domains": [ 00:06:43.286 { 00:06:43.286 "dma_device_id": "system", 00:06:43.286 "dma_device_type": 1 00:06:43.286 }, 00:06:43.286 { 00:06:43.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.286 "dma_device_type": 2 00:06:43.286 } 00:06:43.286 ], 00:06:43.286 "driver_specific": {} 00:06:43.286 } 00:06:43.286 ]' 00:06:43.286 20:59:59 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:43.545 20:59:59 -- common/autotest_common.sh@1369 -- # bs=512 00:06:43.545 20:59:59 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:43.545 20:59:59 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:43.545 20:59:59 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:43.545 20:59:59 -- common/autotest_common.sh@1374 -- # echo 512 00:06:43.545 20:59:59 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:43.545 20:59:59 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:44.920 21:00:00 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:44.920 21:00:00 -- common/autotest_common.sh@1184 -- # local i=0 00:06:44.920 21:00:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:44.920 21:00:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:44.920 21:00:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:46.817 21:00:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:46.817 21:00:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:46.817 21:00:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:46.817 21:00:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
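By the end of the records above the target has been provisioned over the RPC socket and the initiator has attached to it: a TCP transport with in-capsule data disabled, a 512 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, a listener on 10.0.0.2:4420, then an nvme connect from the root namespace and a wait for the test serial to appear in lsblk. Roughly the same sequence, assuming the harness's rpc_cmd wrapper forwards to scripts/rpc.py:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data in this run
$RPC bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side, root namespace; host NQN/ID were generated earlier with nvme gen-hostnqn
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

# waitforserial: poll until exactly one block device carries the test serial
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 1 ]; do sleep 2; done

The harness then reads the bdev back with bdev_get_bdevs and jq (block_size 512, num_blocks 1048576, i.e. 536870912 bytes) and later checks that the block device seen through /sys/block on the initiator reports the same size before any filesystem is created on it.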
00:06:46.817 21:00:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:46.817 21:00:02 -- common/autotest_common.sh@1194 -- # return 0 00:06:46.817 21:00:02 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:46.817 21:00:02 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:46.817 21:00:02 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:46.817 21:00:02 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:46.817 21:00:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:46.817 21:00:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:46.817 21:00:02 -- setup/common.sh@80 -- # echo 536870912 00:06:46.817 21:00:02 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:46.817 21:00:02 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:46.817 21:00:02 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:46.817 21:00:02 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:47.073 21:00:02 -- target/filesystem.sh@69 -- # partprobe 00:06:47.332 21:00:03 -- target/filesystem.sh@70 -- # sleep 1 00:06:48.706 21:00:04 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:48.706 21:00:04 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:48.706 21:00:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:48.706 21:00:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.706 21:00:04 -- common/autotest_common.sh@10 -- # set +x 00:06:48.706 ************************************ 00:06:48.706 START TEST filesystem_ext4 00:06:48.706 ************************************ 00:06:48.706 21:00:04 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:48.706 21:00:04 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:48.706 21:00:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:48.706 21:00:04 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:48.706 21:00:04 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:48.706 21:00:04 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:48.706 21:00:04 -- common/autotest_common.sh@914 -- # local i=0 00:06:48.706 21:00:04 -- common/autotest_common.sh@915 -- # local force 00:06:48.706 21:00:04 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:48.706 21:00:04 -- common/autotest_common.sh@918 -- # force=-F 00:06:48.706 21:00:04 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:48.706 mke2fs 1.46.5 (30-Dec-2021) 00:06:48.706 Discarding device blocks: 0/522240 done 00:06:48.706 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:48.706 Filesystem UUID: db5b3772-387c-4263-988d-90d7a5088392 00:06:48.706 Superblock backups stored on blocks: 00:06:48.706 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:48.706 00:06:48.706 Allocating group tables: 0/64 done 00:06:48.706 Writing inode tables: 0/64 done 00:06:48.706 Creating journal (8192 blocks): done 00:06:48.706 Writing superblocks and filesystem accounting information: 0/64 done 00:06:48.706 00:06:48.706 21:00:04 -- common/autotest_common.sh@931 -- # return 0 00:06:48.706 21:00:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:49.640 21:00:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:49.640 21:00:05 -- target/filesystem.sh@25 -- # sync 00:06:49.640 21:00:05 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:06:49.640 21:00:05 -- target/filesystem.sh@27 -- # sync 00:06:49.640 21:00:05 -- target/filesystem.sh@29 -- # i=0 00:06:49.640 21:00:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:49.640 21:00:05 -- target/filesystem.sh@37 -- # kill -0 2890647 00:06:49.640 21:00:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:49.640 21:00:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:49.641 21:00:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:49.641 21:00:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:49.898 00:06:49.898 real 0m1.180s 00:06:49.898 user 0m0.025s 00:06:49.898 sys 0m0.067s 00:06:49.898 21:00:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.898 21:00:05 -- common/autotest_common.sh@10 -- # set +x 00:06:49.898 ************************************ 00:06:49.898 END TEST filesystem_ext4 00:06:49.898 ************************************ 00:06:49.898 21:00:05 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:49.898 21:00:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:49.898 21:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.898 21:00:05 -- common/autotest_common.sh@10 -- # set +x 00:06:49.898 ************************************ 00:06:49.898 START TEST filesystem_btrfs 00:06:49.898 ************************************ 00:06:49.898 21:00:05 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:49.898 21:00:05 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:49.899 21:00:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:49.899 21:00:05 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:49.899 21:00:05 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:49.899 21:00:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:49.899 21:00:05 -- common/autotest_common.sh@914 -- # local i=0 00:06:49.899 21:00:05 -- common/autotest_common.sh@915 -- # local force 00:06:49.899 21:00:05 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:49.899 21:00:05 -- common/autotest_common.sh@920 -- # force=-f 00:06:49.899 21:00:05 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:50.157 btrfs-progs v6.6.2 00:06:50.157 See https://btrfs.readthedocs.io for more information. 00:06:50.157 00:06:50.157 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:50.157 NOTE: several default settings have changed in version 5.15, please make sure 00:06:50.157 this does not affect your deployments: 00:06:50.157 - DUP for metadata (-m dup) 00:06:50.157 - enabled no-holes (-O no-holes) 00:06:50.157 - enabled free-space-tree (-R free-space-tree) 00:06:50.157 00:06:50.157 Label: (null) 00:06:50.157 UUID: 3be94aef-0b04-44d6-a3b5-e9be0119f3ec 00:06:50.157 Node size: 16384 00:06:50.157 Sector size: 4096 00:06:50.157 Filesystem size: 510.00MiB 00:06:50.157 Block group profiles: 00:06:50.157 Data: single 8.00MiB 00:06:50.157 Metadata: DUP 32.00MiB 00:06:50.157 System: DUP 8.00MiB 00:06:50.157 SSD detected: yes 00:06:50.157 Zoned device: no 00:06:50.157 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:50.157 Runtime features: free-space-tree 00:06:50.157 Checksum: crc32c 00:06:50.157 Number of devices: 1 00:06:50.157 Devices: 00:06:50.157 ID SIZE PATH 00:06:50.157 1 510.00MiB /dev/nvme0n1p1 00:06:50.157 00:06:50.157 21:00:06 -- common/autotest_common.sh@931 -- # return 0 00:06:50.157 21:00:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.089 21:00:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.090 21:00:06 -- target/filesystem.sh@25 -- # sync 00:06:51.090 21:00:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.090 21:00:06 -- target/filesystem.sh@27 -- # sync 00:06:51.090 21:00:06 -- target/filesystem.sh@29 -- # i=0 00:06:51.090 21:00:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.090 21:00:06 -- target/filesystem.sh@37 -- # kill -0 2890647 00:06:51.090 21:00:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.090 21:00:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.090 21:00:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.090 21:00:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.090 00:06:51.090 real 0m1.241s 00:06:51.090 user 0m0.022s 00:06:51.090 sys 0m0.129s 00:06:51.090 21:00:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.090 21:00:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.090 ************************************ 00:06:51.090 END TEST filesystem_btrfs 00:06:51.090 ************************************ 00:06:51.348 21:00:07 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:51.348 21:00:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:51.348 21:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.348 21:00:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.348 ************************************ 00:06:51.348 START TEST filesystem_xfs 00:06:51.348 ************************************ 00:06:51.348 21:00:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:51.348 21:00:07 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:51.348 21:00:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.348 21:00:07 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:51.348 21:00:07 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:51.348 21:00:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:51.348 21:00:07 -- common/autotest_common.sh@914 -- # local i=0 00:06:51.348 21:00:07 -- common/autotest_common.sh@915 -- # local force 00:06:51.348 21:00:07 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:51.348 21:00:07 -- common/autotest_common.sh@920 -- # force=-f 00:06:51.348 21:00:07 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:51.348 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:51.348 = sectsz=512 attr=2, projid32bit=1 00:06:51.348 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:51.348 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:51.348 data = bsize=4096 blocks=130560, imaxpct=25 00:06:51.348 = sunit=0 swidth=0 blks 00:06:51.348 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:51.348 log =internal log bsize=4096 blocks=16384, version=2 00:06:51.348 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:51.348 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:52.283 Discarding blocks...Done. 00:06:52.283 21:00:08 -- common/autotest_common.sh@931 -- # return 0 00:06:52.283 21:00:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:54.811 21:00:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:54.811 21:00:10 -- target/filesystem.sh@25 -- # sync 00:06:54.811 21:00:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:54.811 21:00:10 -- target/filesystem.sh@27 -- # sync 00:06:54.811 21:00:10 -- target/filesystem.sh@29 -- # i=0 00:06:54.811 21:00:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:54.811 21:00:10 -- target/filesystem.sh@37 -- # kill -0 2890647 00:06:54.811 21:00:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:54.811 21:00:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:54.811 21:00:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:54.811 21:00:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:54.811 00:06:54.811 real 0m3.378s 00:06:54.811 user 0m0.021s 00:06:54.811 sys 0m0.076s 00:06:54.811 21:00:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:54.811 21:00:10 -- common/autotest_common.sh@10 -- # set +x 00:06:54.811 ************************************ 00:06:54.811 END TEST filesystem_xfs 00:06:54.811 ************************************ 00:06:54.811 21:00:10 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:54.811 21:00:10 -- target/filesystem.sh@93 -- # sync 00:06:54.811 21:00:10 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:54.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.811 21:00:10 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:54.811 21:00:10 -- common/autotest_common.sh@1205 -- # local i=0 00:06:54.811 21:00:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:54.811 21:00:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:54.811 21:00:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:54.811 21:00:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:54.811 21:00:10 -- common/autotest_common.sh@1217 -- # return 0 00:06:54.811 21:00:10 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:54.811 21:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.811 21:00:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.070 21:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:55.070 21:00:10 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:55.070 21:00:10 -- target/filesystem.sh@101 -- # killprocess 2890647 00:06:55.070 21:00:10 -- common/autotest_common.sh@936 -- # '[' -z 2890647 ']' 00:06:55.070 21:00:10 -- common/autotest_common.sh@940 -- # kill -0 2890647 00:06:55.070 21:00:10 -- 
common/autotest_common.sh@941 -- # uname 00:06:55.070 21:00:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:55.070 21:00:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2890647 00:06:55.070 21:00:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:55.070 21:00:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:55.070 21:00:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2890647' 00:06:55.070 killing process with pid 2890647 00:06:55.070 21:00:10 -- common/autotest_common.sh@955 -- # kill 2890647 00:06:55.070 21:00:10 -- common/autotest_common.sh@960 -- # wait 2890647 00:06:55.329 21:00:11 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:55.329 00:06:55.329 real 0m13.045s 00:06:55.329 user 0m51.280s 00:06:55.329 sys 0m1.355s 00:06:55.329 21:00:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.329 21:00:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.329 ************************************ 00:06:55.329 END TEST nvmf_filesystem_no_in_capsule 00:06:55.329 ************************************ 00:06:55.329 21:00:11 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:55.329 21:00:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:55.329 21:00:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.329 21:00:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.588 ************************************ 00:06:55.588 START TEST nvmf_filesystem_in_capsule 00:06:55.588 ************************************ 00:06:55.588 21:00:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:55.588 21:00:11 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:55.588 21:00:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:55.588 21:00:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:55.588 21:00:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:55.588 21:00:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.588 21:00:11 -- nvmf/common.sh@470 -- # nvmfpid=2893651 00:06:55.588 21:00:11 -- nvmf/common.sh@471 -- # waitforlisten 2893651 00:06:55.588 21:00:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.588 21:00:11 -- common/autotest_common.sh@817 -- # '[' -z 2893651 ']' 00:06:55.588 21:00:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.588 21:00:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:55.588 21:00:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.588 21:00:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:55.588 21:00:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.588 [2024-04-18 21:00:11.398491] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
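That closes the no-in-capsule pass; the run starting here (nvmf_filesystem_in_capsule, pid 2893651) repeats the same flow, the only functional difference being that the transport is created with -c 4096 so commands may carry up to 4 KiB of in-capsule data. Each pass partitions the exported namespace once and then performs the same create/mount/write/unmount check for ext4, btrfs and xfs; a condensed sketch of that loop with the names used in the trace (make_filesystem is sketched a little further down):

mkdir -p /mnt/device
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1

for fs in ext4 btrfs xfs; do
    make_filesystem "$fs" /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device                       # retried by the harness if the mount is busy
    kill -0 "$nvmfpid"                       # the target application must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # controller still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible
done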
00:06:55.588 [2024-04-18 21:00:11.398540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.588 [2024-04-18 21:00:11.463732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.845 [2024-04-18 21:00:11.544949] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.845 [2024-04-18 21:00:11.544980] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.845 [2024-04-18 21:00:11.544987] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.845 [2024-04-18 21:00:11.544992] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.845 [2024-04-18 21:00:11.544997] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.845 [2024-04-18 21:00:11.545081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.845 [2024-04-18 21:00:11.545177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.845 [2024-04-18 21:00:11.545259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.845 [2024-04-18 21:00:11.545261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.410 21:00:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:56.410 21:00:12 -- common/autotest_common.sh@850 -- # return 0 00:06:56.410 21:00:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:56.410 21:00:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:56.410 21:00:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.410 21:00:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.410 21:00:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:56.410 21:00:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:56.410 21:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.410 21:00:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.411 [2024-04-18 21:00:12.256463] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.411 21:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.411 21:00:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:56.411 21:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.411 21:00:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.669 Malloc1 00:06:56.669 21:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.669 21:00:12 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:56.669 21:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.669 21:00:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.669 21:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.669 21:00:12 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:56.669 21:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.669 21:00:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.669 21:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.669 21:00:12 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.669 21:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.670 21:00:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.670 [2024-04-18 21:00:12.403268] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.670 21:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.670 21:00:12 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:56.670 21:00:12 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:56.670 21:00:12 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:56.670 21:00:12 -- common/autotest_common.sh@1366 -- # local bs 00:06:56.670 21:00:12 -- common/autotest_common.sh@1367 -- # local nb 00:06:56.670 21:00:12 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:56.670 21:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.670 21:00:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.670 21:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.670 21:00:12 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:56.670 { 00:06:56.670 "name": "Malloc1", 00:06:56.670 "aliases": [ 00:06:56.670 "56d7e0fa-b60e-4d7b-8fb3-2d76ce44012f" 00:06:56.670 ], 00:06:56.670 "product_name": "Malloc disk", 00:06:56.670 "block_size": 512, 00:06:56.670 "num_blocks": 1048576, 00:06:56.670 "uuid": "56d7e0fa-b60e-4d7b-8fb3-2d76ce44012f", 00:06:56.670 "assigned_rate_limits": { 00:06:56.670 "rw_ios_per_sec": 0, 00:06:56.670 "rw_mbytes_per_sec": 0, 00:06:56.670 "r_mbytes_per_sec": 0, 00:06:56.670 "w_mbytes_per_sec": 0 00:06:56.670 }, 00:06:56.670 "claimed": true, 00:06:56.670 "claim_type": "exclusive_write", 00:06:56.670 "zoned": false, 00:06:56.670 "supported_io_types": { 00:06:56.670 "read": true, 00:06:56.670 "write": true, 00:06:56.670 "unmap": true, 00:06:56.670 "write_zeroes": true, 00:06:56.670 "flush": true, 00:06:56.670 "reset": true, 00:06:56.670 "compare": false, 00:06:56.670 "compare_and_write": false, 00:06:56.670 "abort": true, 00:06:56.670 "nvme_admin": false, 00:06:56.670 "nvme_io": false 00:06:56.670 }, 00:06:56.670 "memory_domains": [ 00:06:56.670 { 00:06:56.670 "dma_device_id": "system", 00:06:56.670 "dma_device_type": 1 00:06:56.670 }, 00:06:56.670 { 00:06:56.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.670 "dma_device_type": 2 00:06:56.670 } 00:06:56.670 ], 00:06:56.670 "driver_specific": {} 00:06:56.670 } 00:06:56.670 ]' 00:06:56.670 21:00:12 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:56.670 21:00:12 -- common/autotest_common.sh@1369 -- # bs=512 00:06:56.670 21:00:12 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:56.670 21:00:12 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:56.670 21:00:12 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:56.670 21:00:12 -- common/autotest_common.sh@1374 -- # echo 512 00:06:56.670 21:00:12 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:56.670 21:00:12 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.056 21:00:13 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:58.056 21:00:13 -- common/autotest_common.sh@1184 -- # local i=0 00:06:58.056 21:00:13 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:58.056 21:00:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:58.056 21:00:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:59.959 21:00:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:59.959 21:00:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:59.959 21:00:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:59.959 21:00:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:59.959 21:00:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:59.959 21:00:15 -- common/autotest_common.sh@1194 -- # return 0 00:06:59.959 21:00:15 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:59.959 21:00:15 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:59.959 21:00:15 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:59.959 21:00:15 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:59.959 21:00:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:59.959 21:00:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:59.959 21:00:15 -- setup/common.sh@80 -- # echo 536870912 00:06:59.959 21:00:15 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:59.959 21:00:15 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:59.959 21:00:15 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:59.959 21:00:15 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:00.217 21:00:16 -- target/filesystem.sh@69 -- # partprobe 00:07:00.475 21:00:16 -- target/filesystem.sh@70 -- # sleep 1 00:07:01.437 21:00:17 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:01.437 21:00:17 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:01.437 21:00:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:01.437 21:00:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.437 21:00:17 -- common/autotest_common.sh@10 -- # set +x 00:07:01.695 ************************************ 00:07:01.695 START TEST filesystem_in_capsule_ext4 00:07:01.695 ************************************ 00:07:01.695 21:00:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:01.695 21:00:17 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:01.695 21:00:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:01.695 21:00:17 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:01.695 21:00:17 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:01.695 21:00:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:01.695 21:00:17 -- common/autotest_common.sh@914 -- # local i=0 00:07:01.695 21:00:17 -- common/autotest_common.sh@915 -- # local force 00:07:01.696 21:00:17 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:01.696 21:00:17 -- common/autotest_common.sh@918 -- # force=-F 00:07:01.696 21:00:17 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:01.696 mke2fs 1.46.5 (30-Dec-2021) 00:07:01.696 Discarding device blocks: 0/522240 done 00:07:01.696 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:01.696 Filesystem UUID: 0f4ec919-30c1-4d4b-9f91-d13aae704a3e 00:07:01.696 Superblock backups stored on blocks: 00:07:01.696 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:01.696 00:07:01.696 
Allocating group tables: 0/64 done 00:07:01.696 Writing inode tables: 0/64 done 00:07:01.954 Creating journal (8192 blocks): done 00:07:01.954 Writing superblocks and filesystem accounting information: 0/64 done 00:07:01.954 00:07:01.954 21:00:17 -- common/autotest_common.sh@931 -- # return 0 00:07:01.954 21:00:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:02.212 21:00:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:02.212 21:00:18 -- target/filesystem.sh@25 -- # sync 00:07:02.212 21:00:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:02.212 21:00:18 -- target/filesystem.sh@27 -- # sync 00:07:02.212 21:00:18 -- target/filesystem.sh@29 -- # i=0 00:07:02.212 21:00:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:02.212 21:00:18 -- target/filesystem.sh@37 -- # kill -0 2893651 00:07:02.212 21:00:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:02.212 21:00:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:02.212 21:00:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:02.212 21:00:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:02.212 00:07:02.212 real 0m0.683s 00:07:02.212 user 0m0.027s 00:07:02.212 sys 0m0.062s 00:07:02.212 21:00:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.212 21:00:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.212 ************************************ 00:07:02.212 END TEST filesystem_in_capsule_ext4 00:07:02.212 ************************************ 00:07:02.470 21:00:18 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:02.470 21:00:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:02.470 21:00:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.470 21:00:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.470 ************************************ 00:07:02.470 START TEST filesystem_in_capsule_btrfs 00:07:02.470 ************************************ 00:07:02.470 21:00:18 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:02.470 21:00:18 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:02.470 21:00:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:02.470 21:00:18 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:02.470 21:00:18 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:02.470 21:00:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:02.470 21:00:18 -- common/autotest_common.sh@914 -- # local i=0 00:07:02.470 21:00:18 -- common/autotest_common.sh@915 -- # local force 00:07:02.470 21:00:18 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:02.470 21:00:18 -- common/autotest_common.sh@920 -- # force=-f 00:07:02.470 21:00:18 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:02.728 btrfs-progs v6.6.2 00:07:02.728 See https://btrfs.readthedocs.io for more information. 00:07:02.728 00:07:02.728 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:02.728 NOTE: several default settings have changed in version 5.15, please make sure 00:07:02.728 this does not affect your deployments: 00:07:02.728 - DUP for metadata (-m dup) 00:07:02.728 - enabled no-holes (-O no-holes) 00:07:02.728 - enabled free-space-tree (-R free-space-tree) 00:07:02.728 00:07:02.728 Label: (null) 00:07:02.728 UUID: 213f3cc5-68dc-4d56-8327-4b6e92dcf304 00:07:02.728 Node size: 16384 00:07:02.728 Sector size: 4096 00:07:02.728 Filesystem size: 510.00MiB 00:07:02.728 Block group profiles: 00:07:02.728 Data: single 8.00MiB 00:07:02.728 Metadata: DUP 32.00MiB 00:07:02.728 System: DUP 8.00MiB 00:07:02.728 SSD detected: yes 00:07:02.728 Zoned device: no 00:07:02.728 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:02.728 Runtime features: free-space-tree 00:07:02.728 Checksum: crc32c 00:07:02.728 Number of devices: 1 00:07:02.728 Devices: 00:07:02.728 ID SIZE PATH 00:07:02.728 1 510.00MiB /dev/nvme0n1p1 00:07:02.728 00:07:02.728 21:00:18 -- common/autotest_common.sh@931 -- # return 0 00:07:02.728 21:00:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.676 21:00:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.676 21:00:19 -- target/filesystem.sh@25 -- # sync 00:07:03.677 21:00:19 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.677 21:00:19 -- target/filesystem.sh@27 -- # sync 00:07:03.677 21:00:19 -- target/filesystem.sh@29 -- # i=0 00:07:03.677 21:00:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.677 21:00:19 -- target/filesystem.sh@37 -- # kill -0 2893651 00:07:03.677 21:00:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.677 21:00:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.677 21:00:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.677 21:00:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.677 00:07:03.677 real 0m1.171s 00:07:03.677 user 0m0.027s 00:07:03.677 sys 0m0.123s 00:07:03.677 21:00:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.677 21:00:19 -- common/autotest_common.sh@10 -- # set +x 00:07:03.677 ************************************ 00:07:03.677 END TEST filesystem_in_capsule_btrfs 00:07:03.677 ************************************ 00:07:03.677 21:00:19 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:03.677 21:00:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:03.677 21:00:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.677 21:00:19 -- common/autotest_common.sh@10 -- # set +x 00:07:03.934 ************************************ 00:07:03.934 START TEST filesystem_in_capsule_xfs 00:07:03.934 ************************************ 00:07:03.934 21:00:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:03.934 21:00:19 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:03.934 21:00:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.934 21:00:19 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:03.934 21:00:19 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:03.934 21:00:19 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:03.934 21:00:19 -- common/autotest_common.sh@914 -- # local i=0 00:07:03.934 21:00:19 -- common/autotest_common.sh@915 -- # local force 00:07:03.934 21:00:19 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:03.934 21:00:19 -- common/autotest_common.sh@920 -- # force=-f 
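The fstype/force records just above come from the small make_filesystem wrapper that every mkfs call in this log goes through: ext4 gets -F, btrfs and xfs get -f, and the call is retried a few times before giving up. A sketch reconstructed from the variables visible in the trace; the retry limit is an assumption, only the flag selection and the final return 0 show up in the log:

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    while ! "mkfs.$fstype" $force "$dev_name"; do
        (( i >= 15 )) && return 1    # retry limit assumed, not shown in the trace
        (( i++ ))
        sleep 1
    done
    return 0
}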
00:07:03.934 21:00:19 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:03.934 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:03.934 = sectsz=512 attr=2, projid32bit=1 00:07:03.934 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:03.934 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:03.935 data = bsize=4096 blocks=130560, imaxpct=25 00:07:03.935 = sunit=0 swidth=0 blks 00:07:03.935 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:03.935 log =internal log bsize=4096 blocks=16384, version=2 00:07:03.935 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:03.935 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:04.868 Discarding blocks...Done. 00:07:04.868 21:00:20 -- common/autotest_common.sh@931 -- # return 0 00:07:04.868 21:00:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.405 21:00:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.405 21:00:23 -- target/filesystem.sh@25 -- # sync 00:07:07.405 21:00:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.405 21:00:23 -- target/filesystem.sh@27 -- # sync 00:07:07.405 21:00:23 -- target/filesystem.sh@29 -- # i=0 00:07:07.405 21:00:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.405 21:00:23 -- target/filesystem.sh@37 -- # kill -0 2893651 00:07:07.405 21:00:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.405 21:00:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.405 21:00:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.405 21:00:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.405 00:07:07.405 real 0m3.589s 00:07:07.405 user 0m0.020s 00:07:07.405 sys 0m0.076s 00:07:07.405 21:00:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.405 21:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:07.405 ************************************ 00:07:07.405 END TEST filesystem_in_capsule_xfs 00:07:07.405 ************************************ 00:07:07.405 21:00:23 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:07.667 21:00:23 -- target/filesystem.sh@93 -- # sync 00:07:07.667 21:00:23 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.925 21:00:23 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.925 21:00:23 -- common/autotest_common.sh@1205 -- # local i=0 00:07:07.925 21:00:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:07.925 21:00:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.925 21:00:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:07.925 21:00:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.925 21:00:23 -- common/autotest_common.sh@1217 -- # return 0 00:07:07.925 21:00:23 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.925 21:00:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.925 21:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:07.925 21:00:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.925 21:00:23 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:07.925 21:00:23 -- target/filesystem.sh@101 -- # killprocess 2893651 00:07:07.925 21:00:23 -- common/autotest_common.sh@936 -- # '[' -z 2893651 ']' 00:07:07.925 21:00:23 -- common/autotest_common.sh@940 -- # kill -0 2893651 
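Once the xfs check above passes, the run tears the data path down the same way the first pass did: drop the test partition, disconnect the kernel initiator, wait for the serial to disappear, delete the subsystem over RPC and stop the target application. The commands as they appear in the trace, with $RPC and $nvmfpid as in the earlier sketches:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # remove the SPDK_TEST partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# waitforserial_disconnect: poll until the test serial is gone from lsblk
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"                                       # killprocess: terminate the reactor ...
wait "$nvmfpid"                                       # ... and reap it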
00:07:07.925 21:00:23 -- common/autotest_common.sh@941 -- # uname 00:07:07.925 21:00:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.925 21:00:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2893651 00:07:07.925 21:00:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.925 21:00:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.925 21:00:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2893651' 00:07:07.925 killing process with pid 2893651 00:07:07.925 21:00:23 -- common/autotest_common.sh@955 -- # kill 2893651 00:07:07.925 21:00:23 -- common/autotest_common.sh@960 -- # wait 2893651 00:07:08.185 21:00:24 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:08.185 00:07:08.185 real 0m12.717s 00:07:08.185 user 0m49.938s 00:07:08.185 sys 0m1.372s 00:07:08.185 21:00:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.185 21:00:24 -- common/autotest_common.sh@10 -- # set +x 00:07:08.185 ************************************ 00:07:08.185 END TEST nvmf_filesystem_in_capsule 00:07:08.185 ************************************ 00:07:08.185 21:00:24 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:08.185 21:00:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:08.185 21:00:24 -- nvmf/common.sh@117 -- # sync 00:07:08.185 21:00:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.185 21:00:24 -- nvmf/common.sh@120 -- # set +e 00:07:08.185 21:00:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.185 21:00:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.185 rmmod nvme_tcp 00:07:08.185 rmmod nvme_fabrics 00:07:08.444 rmmod nvme_keyring 00:07:08.444 21:00:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.444 21:00:24 -- nvmf/common.sh@124 -- # set -e 00:07:08.444 21:00:24 -- nvmf/common.sh@125 -- # return 0 00:07:08.444 21:00:24 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:08.444 21:00:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:08.444 21:00:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:08.444 21:00:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:08.444 21:00:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.444 21:00:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.444 21:00:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.444 21:00:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.444 21:00:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.348 21:00:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.348 00:07:10.348 real 0m34.482s 00:07:10.348 user 1m43.104s 00:07:10.348 sys 0m7.448s 00:07:10.348 21:00:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.348 21:00:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.348 ************************************ 00:07:10.348 END TEST nvmf_filesystem 00:07:10.348 ************************************ 00:07:10.348 21:00:26 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:10.348 21:00:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.348 21:00:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.348 21:00:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.611 ************************************ 00:07:10.612 START TEST nvmf_discovery 00:07:10.612 ************************************ 00:07:10.612 
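nvmf_filesystem ends here, and nvmftestfini has already undone the node state before the discovery test begins: the kernel initiator modules are unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring above are that removal) and the TCP topology set up at the start of the run is dismantled. Approximately, with the helper body for the namespace removal assumed:

sync
modprobe -v -r nvme-tcp          # retried by the harness up to 20 times under set +e
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk  # remove_spdk_ns; exact helper body assumed
ip -4 addr flush cvl_0_1         # drop 10.0.0.1 from the initiator interface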
21:00:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:10.612 * Looking for test storage... 00:07:10.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.612 21:00:26 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.612 21:00:26 -- nvmf/common.sh@7 -- # uname -s 00:07:10.612 21:00:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.612 21:00:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.612 21:00:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.612 21:00:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.612 21:00:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.612 21:00:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.612 21:00:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.612 21:00:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.612 21:00:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.612 21:00:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.612 21:00:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:10.612 21:00:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:10.612 21:00:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.612 21:00:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.612 21:00:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.612 21:00:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.612 21:00:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.612 21:00:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.612 21:00:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.612 21:00:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.612 21:00:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.612 21:00:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.612 21:00:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.612 21:00:26 -- paths/export.sh@5 -- # export PATH 00:07:10.612 21:00:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.612 21:00:26 -- nvmf/common.sh@47 -- # : 0 00:07:10.612 21:00:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.612 21:00:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.612 21:00:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.612 21:00:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.612 21:00:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.612 21:00:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.612 21:00:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.612 21:00:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.612 21:00:26 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:10.612 21:00:26 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:10.612 21:00:26 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:10.612 21:00:26 -- target/discovery.sh@15 -- # hash nvme 00:07:10.612 21:00:26 -- target/discovery.sh@20 -- # nvmftestinit 00:07:10.612 21:00:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:10.612 21:00:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.612 21:00:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:10.612 21:00:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:10.612 21:00:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:10.612 21:00:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.612 21:00:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.612 21:00:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.612 21:00:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:10.612 21:00:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:10.612 21:00:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.612 21:00:26 -- common/autotest_common.sh@10 -- # set +x 00:07:15.904 21:00:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:15.904 21:00:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.904 21:00:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.904 21:00:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.904 21:00:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.904 21:00:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.904 21:00:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.904 21:00:31 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:15.904 21:00:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.904 21:00:31 -- nvmf/common.sh@296 -- # e810=() 00:07:15.904 21:00:31 -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.904 21:00:31 -- nvmf/common.sh@297 -- # x722=() 00:07:15.904 21:00:31 -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.904 21:00:31 -- nvmf/common.sh@298 -- # mlx=() 00:07:15.904 21:00:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.904 21:00:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.904 21:00:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.905 21:00:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.905 21:00:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.905 21:00:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.905 21:00:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.905 21:00:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.905 21:00:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:15.905 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:15.905 21:00:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.905 21:00:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:15.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:15.905 21:00:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.905 21:00:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.905 21:00:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.905 21:00:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:15.905 21:00:31 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.905 21:00:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:15.905 Found net devices under 0000:86:00.0: cvl_0_0 00:07:15.905 21:00:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.905 21:00:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.905 21:00:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.905 21:00:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:15.905 21:00:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.905 21:00:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:15.905 Found net devices under 0000:86:00.1: cvl_0_1 00:07:15.905 21:00:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.905 21:00:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:15.905 21:00:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:15.905 21:00:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:15.905 21:00:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:15.905 21:00:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.905 21:00:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.905 21:00:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.905 21:00:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.905 21:00:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.905 21:00:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.905 21:00:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.905 21:00:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.905 21:00:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.905 21:00:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.905 21:00:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.905 21:00:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.905 21:00:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.164 21:00:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.164 21:00:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.164 21:00:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.164 21:00:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.164 21:00:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.164 21:00:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.164 21:00:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:07:16.164 00:07:16.164 --- 10.0.0.2 ping statistics --- 00:07:16.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.164 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:07:16.164 21:00:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:07:16.164 00:07:16.164 --- 10.0.0.1 ping statistics --- 00:07:16.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.164 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:16.164 21:00:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.164 21:00:32 -- nvmf/common.sh@411 -- # return 0 00:07:16.164 21:00:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:16.164 21:00:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.164 21:00:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:16.164 21:00:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:16.164 21:00:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.164 21:00:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:16.164 21:00:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:16.164 21:00:32 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:16.164 21:00:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:16.164 21:00:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:16.164 21:00:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.164 21:00:32 -- nvmf/common.sh@470 -- # nvmfpid=2899761 00:07:16.164 21:00:32 -- nvmf/common.sh@471 -- # waitforlisten 2899761 00:07:16.164 21:00:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.164 21:00:32 -- common/autotest_common.sh@817 -- # '[' -z 2899761 ']' 00:07:16.164 21:00:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.165 21:00:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.165 21:00:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.165 21:00:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.165 21:00:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.424 [2024-04-18 21:00:32.105335] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:07:16.424 [2024-04-18 21:00:32.105379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.424 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.424 [2024-04-18 21:00:32.169818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.424 [2024-04-18 21:00:32.242118] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.424 [2024-04-18 21:00:32.242159] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.424 [2024-04-18 21:00:32.242166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.424 [2024-04-18 21:00:32.242172] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.424 [2024-04-18 21:00:32.242177] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
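nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers; only then are any rpc_cmd calls issued. A rough equivalent outside the harness, with the namespace, masks, and socket path from this run (rpc_get_methods is used here only as a cheap liveness probe):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # first RPC of the test, as traced below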
00:07:16.424 [2024-04-18 21:00:32.242235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.424 [2024-04-18 21:00:32.242328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.424 [2024-04-18 21:00:32.242417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.424 [2024-04-18 21:00:32.242418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.993 21:00:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.993 21:00:32 -- common/autotest_common.sh@850 -- # return 0 00:07:16.993 21:00:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:16.993 21:00:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:16.993 21:00:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.253 21:00:32 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:17.253 21:00:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 [2024-04-18 21:00:32.950416] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.253 21:00:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:32 -- target/discovery.sh@26 -- # seq 1 4 00:07:17.253 21:00:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.253 21:00:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:17.253 21:00:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 Null1 00:07:17.253 21:00:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:17.253 21:00:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:17.253 21:00:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.253 21:00:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 [2024-04-18 21:00:32.999976] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.253 21:00:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 Null2 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:17.253 21:00:33 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.253 21:00:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 Null3 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.253 21:00:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 Null4 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:17.253 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.253 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.253 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.253 21:00:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:17.254 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.254 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.254 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.254 21:00:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:17.254 
21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.254 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.254 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.254 21:00:33 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.254 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.254 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.254 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.254 21:00:33 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:17.254 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.254 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.254 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.254 21:00:33 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:17.512 00:07:17.512 Discovery Log Number of Records 6, Generation counter 6 00:07:17.512 =====Discovery Log Entry 0====== 00:07:17.512 trtype: tcp 00:07:17.512 adrfam: ipv4 00:07:17.512 subtype: current discovery subsystem 00:07:17.512 treq: not required 00:07:17.512 portid: 0 00:07:17.512 trsvcid: 4420 00:07:17.512 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:17.512 traddr: 10.0.0.2 00:07:17.512 eflags: explicit discovery connections, duplicate discovery information 00:07:17.512 sectype: none 00:07:17.512 =====Discovery Log Entry 1====== 00:07:17.512 trtype: tcp 00:07:17.512 adrfam: ipv4 00:07:17.512 subtype: nvme subsystem 00:07:17.512 treq: not required 00:07:17.512 portid: 0 00:07:17.512 trsvcid: 4420 00:07:17.512 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:17.512 traddr: 10.0.0.2 00:07:17.512 eflags: none 00:07:17.512 sectype: none 00:07:17.512 =====Discovery Log Entry 2====== 00:07:17.512 trtype: tcp 00:07:17.512 adrfam: ipv4 00:07:17.512 subtype: nvme subsystem 00:07:17.512 treq: not required 00:07:17.512 portid: 0 00:07:17.512 trsvcid: 4420 00:07:17.512 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:17.512 traddr: 10.0.0.2 00:07:17.512 eflags: none 00:07:17.512 sectype: none 00:07:17.512 =====Discovery Log Entry 3====== 00:07:17.512 trtype: tcp 00:07:17.512 adrfam: ipv4 00:07:17.512 subtype: nvme subsystem 00:07:17.512 treq: not required 00:07:17.512 portid: 0 00:07:17.512 trsvcid: 4420 00:07:17.512 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:17.512 traddr: 10.0.0.2 00:07:17.512 eflags: none 00:07:17.512 sectype: none 00:07:17.512 =====Discovery Log Entry 4====== 00:07:17.512 trtype: tcp 00:07:17.512 adrfam: ipv4 00:07:17.512 subtype: nvme subsystem 00:07:17.512 treq: not required 00:07:17.512 portid: 0 00:07:17.512 trsvcid: 4420 00:07:17.512 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:17.512 traddr: 10.0.0.2 00:07:17.512 eflags: none 00:07:17.512 sectype: none 00:07:17.512 =====Discovery Log Entry 5====== 00:07:17.512 trtype: tcp 00:07:17.512 adrfam: ipv4 00:07:17.512 subtype: discovery subsystem referral 00:07:17.512 treq: not required 00:07:17.512 portid: 0 00:07:17.512 trsvcid: 4430 00:07:17.512 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:17.512 traddr: 10.0.0.2 00:07:17.512 eflags: none 00:07:17.512 sectype: none 00:07:17.512 21:00:33 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:17.512 Perform nvmf subsystem discovery via RPC 00:07:17.512 21:00:33 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 [2024-04-18 21:00:33.204464] nvmf_rpc.c: 279:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:17.512 [ 00:07:17.512 { 00:07:17.512 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:17.512 "subtype": "Discovery", 00:07:17.512 "listen_addresses": [ 00:07:17.512 { 00:07:17.512 "transport": "TCP", 00:07:17.512 "trtype": "TCP", 00:07:17.512 "adrfam": "IPv4", 00:07:17.512 "traddr": "10.0.0.2", 00:07:17.512 "trsvcid": "4420" 00:07:17.512 } 00:07:17.512 ], 00:07:17.512 "allow_any_host": true, 00:07:17.512 "hosts": [] 00:07:17.512 }, 00:07:17.512 { 00:07:17.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:17.512 "subtype": "NVMe", 00:07:17.512 "listen_addresses": [ 00:07:17.512 { 00:07:17.512 "transport": "TCP", 00:07:17.512 "trtype": "TCP", 00:07:17.512 "adrfam": "IPv4", 00:07:17.512 "traddr": "10.0.0.2", 00:07:17.512 "trsvcid": "4420" 00:07:17.512 } 00:07:17.512 ], 00:07:17.512 "allow_any_host": true, 00:07:17.512 "hosts": [], 00:07:17.512 "serial_number": "SPDK00000000000001", 00:07:17.512 "model_number": "SPDK bdev Controller", 00:07:17.512 "max_namespaces": 32, 00:07:17.512 "min_cntlid": 1, 00:07:17.512 "max_cntlid": 65519, 00:07:17.512 "namespaces": [ 00:07:17.512 { 00:07:17.512 "nsid": 1, 00:07:17.512 "bdev_name": "Null1", 00:07:17.512 "name": "Null1", 00:07:17.512 "nguid": "9C028356C0D241DAB440390DDA9FBA95", 00:07:17.512 "uuid": "9c028356-c0d2-41da-b440-390dda9fba95" 00:07:17.512 } 00:07:17.512 ] 00:07:17.512 }, 00:07:17.512 { 00:07:17.512 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:17.512 "subtype": "NVMe", 00:07:17.512 "listen_addresses": [ 00:07:17.512 { 00:07:17.512 "transport": "TCP", 00:07:17.512 "trtype": "TCP", 00:07:17.512 "adrfam": "IPv4", 00:07:17.512 "traddr": "10.0.0.2", 00:07:17.512 "trsvcid": "4420" 00:07:17.512 } 00:07:17.512 ], 00:07:17.512 "allow_any_host": true, 00:07:17.512 "hosts": [], 00:07:17.512 "serial_number": "SPDK00000000000002", 00:07:17.512 "model_number": "SPDK bdev Controller", 00:07:17.512 "max_namespaces": 32, 00:07:17.512 "min_cntlid": 1, 00:07:17.512 "max_cntlid": 65519, 00:07:17.512 "namespaces": [ 00:07:17.512 { 00:07:17.512 "nsid": 1, 00:07:17.512 "bdev_name": "Null2", 00:07:17.512 "name": "Null2", 00:07:17.512 "nguid": "DE489C5C197A454FA5A0F9B10EC950D3", 00:07:17.512 "uuid": "de489c5c-197a-454f-a5a0-f9b10ec950d3" 00:07:17.512 } 00:07:17.512 ] 00:07:17.512 }, 00:07:17.512 { 00:07:17.512 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:17.512 "subtype": "NVMe", 00:07:17.512 "listen_addresses": [ 00:07:17.512 { 00:07:17.512 "transport": "TCP", 00:07:17.512 "trtype": "TCP", 00:07:17.512 "adrfam": "IPv4", 00:07:17.512 "traddr": "10.0.0.2", 00:07:17.512 "trsvcid": "4420" 00:07:17.512 } 00:07:17.512 ], 00:07:17.512 "allow_any_host": true, 00:07:17.512 "hosts": [], 00:07:17.512 "serial_number": "SPDK00000000000003", 00:07:17.512 "model_number": "SPDK bdev Controller", 00:07:17.512 "max_namespaces": 32, 00:07:17.512 "min_cntlid": 1, 00:07:17.512 "max_cntlid": 65519, 00:07:17.512 "namespaces": [ 00:07:17.512 { 00:07:17.512 "nsid": 1, 00:07:17.512 "bdev_name": "Null3", 00:07:17.512 "name": "Null3", 00:07:17.512 "nguid": "1DB69F85211144438BA01362E85847A8", 00:07:17.512 "uuid": "1db69f85-2111-4443-8ba0-1362e85847a8" 00:07:17.512 } 00:07:17.512 ] 
00:07:17.512 }, 00:07:17.512 { 00:07:17.512 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:17.512 "subtype": "NVMe", 00:07:17.512 "listen_addresses": [ 00:07:17.512 { 00:07:17.512 "transport": "TCP", 00:07:17.512 "trtype": "TCP", 00:07:17.512 "adrfam": "IPv4", 00:07:17.512 "traddr": "10.0.0.2", 00:07:17.512 "trsvcid": "4420" 00:07:17.512 } 00:07:17.512 ], 00:07:17.512 "allow_any_host": true, 00:07:17.512 "hosts": [], 00:07:17.512 "serial_number": "SPDK00000000000004", 00:07:17.512 "model_number": "SPDK bdev Controller", 00:07:17.512 "max_namespaces": 32, 00:07:17.512 "min_cntlid": 1, 00:07:17.512 "max_cntlid": 65519, 00:07:17.512 "namespaces": [ 00:07:17.512 { 00:07:17.512 "nsid": 1, 00:07:17.512 "bdev_name": "Null4", 00:07:17.512 "name": "Null4", 00:07:17.512 "nguid": "86C5437A8C0748AABC49A0237C6E9557", 00:07:17.512 "uuid": "86c5437a-8c07-48aa-bc49-a0237c6e9557" 00:07:17.512 } 00:07:17.512 ] 00:07:17.512 } 00:07:17.512 ] 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@42 -- # seq 1 4 00:07:17.512 21:00:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.512 21:00:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.512 21:00:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.512 21:00:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.512 21:00:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
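The nvmf_get_subsystems dump above and the discovery log fetched earlier with nvme discover (six records: the current discovery subsystem, four NVMe subsystems, one referral) are two views of the same target state, which is what the test compares before deleting everything again. A quick manual cross-check, reusing the address and port from this run and leaving out the --hostnqn/--hostid pair for brevity:

  nvme discover -t tcp -a 10.0.0.2 -s 4420 -o json | jq '.records | length'
  rpc_cmd nvmf_get_subsystems | jq -r '.[].nqn'
  rpc_cmd nvmf_get_subsystems | jq -r '.[].namespaces[]?.bdev_name'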
00:07:17.512 21:00:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:17.512 21:00:33 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:17.512 21:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.512 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 21:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.512 21:00:33 -- target/discovery.sh@49 -- # check_bdevs= 00:07:17.512 21:00:33 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:17.512 21:00:33 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:17.512 21:00:33 -- target/discovery.sh@57 -- # nvmftestfini 00:07:17.512 21:00:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:17.512 21:00:33 -- nvmf/common.sh@117 -- # sync 00:07:17.512 21:00:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:17.512 21:00:33 -- nvmf/common.sh@120 -- # set +e 00:07:17.512 21:00:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:17.512 21:00:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:17.512 rmmod nvme_tcp 00:07:17.512 rmmod nvme_fabrics 00:07:17.512 rmmod nvme_keyring 00:07:17.512 21:00:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:17.512 21:00:33 -- nvmf/common.sh@124 -- # set -e 00:07:17.512 21:00:33 -- nvmf/common.sh@125 -- # return 0 00:07:17.512 21:00:33 -- nvmf/common.sh@478 -- # '[' -n 2899761 ']' 00:07:17.512 21:00:33 -- nvmf/common.sh@479 -- # killprocess 2899761 00:07:17.512 21:00:33 -- common/autotest_common.sh@936 -- # '[' -z 2899761 ']' 00:07:17.512 21:00:33 -- common/autotest_common.sh@940 -- # kill -0 2899761 00:07:17.512 21:00:33 -- common/autotest_common.sh@941 -- # uname 00:07:17.512 21:00:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.512 21:00:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2899761 00:07:17.771 21:00:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.771 21:00:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.771 21:00:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2899761' 00:07:17.771 killing process with pid 2899761 00:07:17.771 21:00:33 -- common/autotest_common.sh@955 -- # kill 2899761 00:07:17.771 [2024-04-18 21:00:33.471218] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:17.771 21:00:33 -- common/autotest_common.sh@960 -- # wait 2899761 00:07:17.771 21:00:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:17.771 21:00:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:17.771 21:00:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:17.771 21:00:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:17.771 21:00:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:17.771 21:00:33 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.771 21:00:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.771 21:00:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.329 21:00:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:20.329 00:07:20.329 real 0m9.365s 00:07:20.329 user 0m7.281s 00:07:20.329 sys 0m4.554s 00:07:20.329 21:00:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.329 21:00:35 -- common/autotest_common.sh@10 -- # set +x 00:07:20.329 ************************************ 00:07:20.329 END TEST nvmf_discovery 00:07:20.329 ************************************ 00:07:20.329 21:00:35 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:20.329 21:00:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:20.329 21:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.329 21:00:35 -- common/autotest_common.sh@10 -- # set +x 00:07:20.329 ************************************ 00:07:20.329 START TEST nvmf_referrals 00:07:20.329 ************************************ 00:07:20.329 21:00:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:20.329 * Looking for test storage... 00:07:20.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.329 21:00:36 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.329 21:00:36 -- nvmf/common.sh@7 -- # uname -s 00:07:20.329 21:00:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.329 21:00:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.329 21:00:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.329 21:00:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.329 21:00:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.329 21:00:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.329 21:00:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.329 21:00:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.329 21:00:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.329 21:00:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.329 21:00:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.329 21:00:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.329 21:00:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.329 21:00:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.329 21:00:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.329 21:00:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.329 21:00:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.329 21:00:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.329 21:00:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.329 21:00:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.329 21:00:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.329 21:00:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.329 21:00:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.329 21:00:36 -- paths/export.sh@5 -- # export PATH 00:07:20.329 21:00:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.329 21:00:36 -- nvmf/common.sh@47 -- # : 0 00:07:20.329 21:00:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:20.329 21:00:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:20.329 21:00:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.329 21:00:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.329 21:00:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.329 21:00:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:20.329 21:00:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:20.329 21:00:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:20.329 21:00:36 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:20.329 21:00:36 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:20.329 21:00:36 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:20.329 21:00:36 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:20.329 21:00:36 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:20.329 21:00:36 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:20.329 21:00:36 -- target/referrals.sh@37 -- # nvmftestinit 00:07:20.329 21:00:36 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:20.329 21:00:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.329 21:00:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:20.329 21:00:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:20.329 21:00:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:20.329 21:00:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.329 21:00:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.329 21:00:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.329 21:00:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:20.329 21:00:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:20.329 21:00:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:20.329 21:00:36 -- common/autotest_common.sh@10 -- # set +x 00:07:26.927 21:00:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:26.927 21:00:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.927 21:00:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.927 21:00:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.927 21:00:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.927 21:00:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.927 21:00:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.927 21:00:42 -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.927 21:00:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.927 21:00:42 -- nvmf/common.sh@296 -- # e810=() 00:07:26.927 21:00:42 -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.927 21:00:42 -- nvmf/common.sh@297 -- # x722=() 00:07:26.927 21:00:42 -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.927 21:00:42 -- nvmf/common.sh@298 -- # mlx=() 00:07:26.927 21:00:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.927 21:00:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.927 21:00:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:26.927 21:00:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.927 21:00:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.927 21:00:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.927 21:00:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:26.927 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:26.927 21:00:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.927 21:00:42 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.927 21:00:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:26.927 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:26.927 21:00:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.927 21:00:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.927 21:00:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.927 21:00:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:26.927 21:00:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.927 21:00:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:26.927 Found net devices under 0000:86:00.0: cvl_0_0 00:07:26.927 21:00:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.927 21:00:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.927 21:00:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.927 21:00:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:26.927 21:00:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.927 21:00:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:26.927 Found net devices under 0000:86:00.1: cvl_0_1 00:07:26.927 21:00:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.927 21:00:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:26.927 21:00:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:26.927 21:00:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:26.927 21:00:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.927 21:00:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.927 21:00:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.927 21:00:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:26.927 21:00:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.927 21:00:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.927 21:00:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:26.927 21:00:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.927 21:00:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.927 21:00:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:26.927 21:00:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:26.927 21:00:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.927 21:00:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
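nvmf_tcp_init, traced here exactly as in the discovery run, turns the two E810 ports into a point-to-point TCP test bed: the target-side port moves into its own network namespace, each side gets an address on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port on the initiator side, and a ping in each direction confirms the path. Condensed, with the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator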
00:07:26.927 21:00:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.927 21:00:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.927 21:00:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:26.927 21:00:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.927 21:00:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.927 21:00:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.927 21:00:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:26.927 00:07:26.927 --- 10.0.0.2 ping statistics --- 00:07:26.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.927 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:26.927 21:00:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:07:26.927 00:07:26.927 --- 10.0.0.1 ping statistics --- 00:07:26.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.927 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:07:26.927 21:00:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.927 21:00:42 -- nvmf/common.sh@411 -- # return 0 00:07:26.927 21:00:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:26.927 21:00:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.927 21:00:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:26.927 21:00:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.927 21:00:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:26.927 21:00:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:26.927 21:00:42 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:26.927 21:00:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:26.927 21:00:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:26.927 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:07:26.927 21:00:42 -- nvmf/common.sh@470 -- # nvmfpid=2903860 00:07:26.927 21:00:42 -- nvmf/common.sh@471 -- # waitforlisten 2903860 00:07:26.927 21:00:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.928 21:00:42 -- common/autotest_common.sh@817 -- # '[' -z 2903860 ']' 00:07:26.928 21:00:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.928 21:00:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:26.928 21:00:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.928 21:00:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:26.928 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:07:26.928 [2024-04-18 21:00:42.641024] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:07:26.928 [2024-04-18 21:00:42.641064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.928 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.928 [2024-04-18 21:00:42.704005] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.928 [2024-04-18 21:00:42.785424] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.928 [2024-04-18 21:00:42.785459] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.928 [2024-04-18 21:00:42.785466] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.928 [2024-04-18 21:00:42.785472] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.928 [2024-04-18 21:00:42.785477] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.928 [2024-04-18 21:00:42.785531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.928 [2024-04-18 21:00:42.785582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.928 [2024-04-18 21:00:42.785686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.928 [2024-04-18 21:00:42.785687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.865 21:00:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:27.865 21:00:43 -- common/autotest_common.sh@850 -- # return 0 00:07:27.865 21:00:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:27.865 21:00:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 21:00:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.865 21:00:43 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 [2024-04-18 21:00:43.473234] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.865 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 [2024-04-18 21:00:43.486700] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:27.865 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 21:00:43 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.865 21:00:43 -- target/referrals.sh@48 -- # jq length 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:27.865 21:00:43 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:27.865 21:00:43 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:27.865 21:00:43 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.865 21:00:43 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- target/referrals.sh@21 -- # sort 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:27.865 21:00:43 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:27.865 21:00:43 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:27.865 21:00:43 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:27.865 21:00:43 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:27.865 21:00:43 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:27.865 21:00:43 -- target/referrals.sh@26 -- # sort 00:07:27.865 21:00:43 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:27.865 21:00:43 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.865 21:00:43 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:27.865 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.865 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.125 21:00:43 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:28.125 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.125 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.125 21:00:43 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:07:28.125 21:00:43 -- target/referrals.sh@56 -- # jq length 00:07:28.125 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.125 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.125 21:00:43 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:28.125 21:00:43 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:28.125 21:00:43 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.125 21:00:43 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.125 21:00:43 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.125 21:00:43 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.125 21:00:43 -- target/referrals.sh@26 -- # sort 00:07:28.125 21:00:43 -- target/referrals.sh@26 -- # echo 00:07:28.125 21:00:43 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:28.125 21:00:43 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:28.125 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.125 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 21:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.125 21:00:43 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:28.125 21:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.125 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 21:00:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.125 21:00:44 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:28.125 21:00:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:28.125 21:00:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.125 21:00:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:28.125 21:00:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.125 21:00:44 -- target/referrals.sh@21 -- # sort 00:07:28.125 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 21:00:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.125 21:00:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:28.125 21:00:44 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:28.125 21:00:44 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:28.125 21:00:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.125 21:00:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.383 21:00:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.383 21:00:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.383 21:00:44 -- target/referrals.sh@26 -- # sort 00:07:28.383 21:00:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:28.383 21:00:44 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:28.383 21:00:44 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:07:28.383 21:00:44 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:28.383 21:00:44 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:28.383 21:00:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.383 21:00:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:28.641 21:00:44 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:28.641 21:00:44 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:28.641 21:00:44 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:28.641 21:00:44 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:28.641 21:00:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.641 21:00:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:28.641 21:00:44 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:28.641 21:00:44 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:28.641 21:00:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.641 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.641 21:00:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.641 21:00:44 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:28.641 21:00:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:28.641 21:00:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.641 21:00:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:28.641 21:00:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.641 21:00:44 -- target/referrals.sh@21 -- # sort 00:07:28.641 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.641 21:00:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.641 21:00:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:28.641 21:00:44 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:28.641 21:00:44 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:28.641 21:00:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.641 21:00:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.641 21:00:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.641 21:00:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.641 21:00:44 -- target/referrals.sh@26 -- # sort 00:07:28.900 21:00:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:28.900 21:00:44 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:28.900 21:00:44 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:28.900 21:00:44 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:28.900 21:00:44 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:07:28.900 21:00:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.900 21:00:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:28.900 21:00:44 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:28.900 21:00:44 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:28.900 21:00:44 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:28.900 21:00:44 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:28.900 21:00:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.900 21:00:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:29.160 21:00:44 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:29.160 21:00:44 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:29.160 21:00:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.160 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:07:29.160 21:00:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.160 21:00:44 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:29.160 21:00:44 -- target/referrals.sh@82 -- # jq length 00:07:29.160 21:00:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.160 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:07:29.160 21:00:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.160 21:00:44 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:29.160 21:00:44 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:29.160 21:00:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:29.160 21:00:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:29.160 21:00:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:29.160 21:00:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:29.160 21:00:44 -- target/referrals.sh@26 -- # sort 00:07:29.160 21:00:45 -- target/referrals.sh@26 -- # echo 00:07:29.160 21:00:45 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:29.160 21:00:45 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:29.160 21:00:45 -- target/referrals.sh@86 -- # nvmftestfini 00:07:29.160 21:00:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:29.160 21:00:45 -- nvmf/common.sh@117 -- # sync 00:07:29.160 21:00:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.160 21:00:45 -- nvmf/common.sh@120 -- # set +e 00:07:29.160 21:00:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.160 21:00:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.160 rmmod nvme_tcp 00:07:29.420 rmmod nvme_fabrics 00:07:29.420 rmmod nvme_keyring 00:07:29.420 21:00:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.420 21:00:45 -- nvmf/common.sh@124 -- # set -e 
00:07:29.420 21:00:45 -- nvmf/common.sh@125 -- # return 0 00:07:29.420 21:00:45 -- nvmf/common.sh@478 -- # '[' -n 2903860 ']' 00:07:29.420 21:00:45 -- nvmf/common.sh@479 -- # killprocess 2903860 00:07:29.420 21:00:45 -- common/autotest_common.sh@936 -- # '[' -z 2903860 ']' 00:07:29.420 21:00:45 -- common/autotest_common.sh@940 -- # kill -0 2903860 00:07:29.420 21:00:45 -- common/autotest_common.sh@941 -- # uname 00:07:29.420 21:00:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:29.420 21:00:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2903860 00:07:29.420 21:00:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:29.420 21:00:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:29.420 21:00:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2903860' 00:07:29.420 killing process with pid 2903860 00:07:29.420 21:00:45 -- common/autotest_common.sh@955 -- # kill 2903860 00:07:29.420 21:00:45 -- common/autotest_common.sh@960 -- # wait 2903860 00:07:29.679 21:00:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:29.679 21:00:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:29.679 21:00:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:29.679 21:00:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.679 21:00:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.679 21:00:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.679 21:00:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.679 21:00:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.584 21:00:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:31.584 00:07:31.584 real 0m11.541s 00:07:31.584 user 0m12.958s 00:07:31.584 sys 0m5.640s 00:07:31.584 21:00:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.584 21:00:47 -- common/autotest_common.sh@10 -- # set +x 00:07:31.584 ************************************ 00:07:31.584 END TEST nvmf_referrals 00:07:31.584 ************************************ 00:07:31.585 21:00:47 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:31.585 21:00:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.585 21:00:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.585 21:00:47 -- common/autotest_common.sh@10 -- # set +x 00:07:31.844 ************************************ 00:07:31.844 START TEST nvmf_connect_disconnect 00:07:31.844 ************************************ 00:07:31.844 21:00:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:31.844 * Looking for test storage... 
00:07:31.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.844 21:00:47 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.844 21:00:47 -- nvmf/common.sh@7 -- # uname -s 00:07:31.844 21:00:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.844 21:00:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.844 21:00:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.844 21:00:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.844 21:00:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.844 21:00:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.844 21:00:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.844 21:00:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.844 21:00:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.844 21:00:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.844 21:00:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.844 21:00:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.844 21:00:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.844 21:00:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.844 21:00:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.844 21:00:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.844 21:00:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.844 21:00:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.844 21:00:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.844 21:00:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.844 21:00:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.844 21:00:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.844 21:00:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.844 21:00:47 -- paths/export.sh@5 -- # export PATH 00:07:31.844 21:00:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.844 21:00:47 -- nvmf/common.sh@47 -- # : 0 00:07:31.844 21:00:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.844 21:00:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.844 21:00:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.844 21:00:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.844 21:00:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.844 21:00:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.844 21:00:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.844 21:00:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.844 21:00:47 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.844 21:00:47 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:31.844 21:00:47 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:31.844 21:00:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:31.844 21:00:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.844 21:00:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:31.844 21:00:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:31.844 21:00:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:31.844 21:00:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.844 21:00:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.844 21:00:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.844 21:00:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:31.844 21:00:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:31.844 21:00:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:31.844 21:00:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.405 21:00:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:38.405 21:00:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.405 21:00:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.405 21:00:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.405 21:00:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.405 21:00:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.405 21:00:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.405 21:00:53 -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.405 21:00:53 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:07:38.405 21:00:53 -- nvmf/common.sh@296 -- # e810=() 00:07:38.405 21:00:53 -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.405 21:00:53 -- nvmf/common.sh@297 -- # x722=() 00:07:38.405 21:00:53 -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.405 21:00:53 -- nvmf/common.sh@298 -- # mlx=() 00:07:38.405 21:00:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.405 21:00:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.405 21:00:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.405 21:00:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.405 21:00:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.405 21:00:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.405 21:00:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:38.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:38.405 21:00:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.405 21:00:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:38.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:38.405 21:00:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.405 21:00:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.405 21:00:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.405 21:00:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:38.405 21:00:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.405 21:00:53 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:07:38.405 Found net devices under 0000:86:00.0: cvl_0_0 00:07:38.405 21:00:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.405 21:00:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.405 21:00:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.405 21:00:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:38.405 21:00:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.405 21:00:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:38.405 Found net devices under 0000:86:00.1: cvl_0_1 00:07:38.405 21:00:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.405 21:00:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:38.405 21:00:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:38.405 21:00:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:38.405 21:00:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.405 21:00:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.405 21:00:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.405 21:00:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.405 21:00:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.405 21:00:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.405 21:00:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.405 21:00:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.405 21:00:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.405 21:00:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.405 21:00:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.405 21:00:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.405 21:00:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.405 21:00:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.405 21:00:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.405 21:00:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:38.405 21:00:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.405 21:00:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.405 21:00:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.405 21:00:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:07:38.405 00:07:38.405 --- 10.0.0.2 ping statistics --- 00:07:38.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.405 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:07:38.405 21:00:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:07:38.405 00:07:38.405 --- 10.0.0.1 ping statistics --- 00:07:38.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.405 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:38.405 21:00:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.405 21:00:53 -- nvmf/common.sh@411 -- # return 0 00:07:38.405 21:00:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:38.405 21:00:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.405 21:00:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:38.405 21:00:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.405 21:00:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:38.405 21:00:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:38.405 21:00:53 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:38.405 21:00:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:38.405 21:00:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:38.405 21:00:53 -- common/autotest_common.sh@10 -- # set +x 00:07:38.405 21:00:53 -- nvmf/common.sh@470 -- # nvmfpid=2908233 00:07:38.405 21:00:53 -- nvmf/common.sh@471 -- # waitforlisten 2908233 00:07:38.405 21:00:53 -- common/autotest_common.sh@817 -- # '[' -z 2908233 ']' 00:07:38.405 21:00:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.405 21:00:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:38.405 21:00:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.405 21:00:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:38.405 21:00:53 -- common/autotest_common.sh@10 -- # set +x 00:07:38.405 21:00:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:38.405 [2024-04-18 21:00:53.551253] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:07:38.405 [2024-04-18 21:00:53.551294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.405 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.405 [2024-04-18 21:00:53.613815] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.405 [2024-04-18 21:00:53.691798] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.405 [2024-04-18 21:00:53.691844] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.405 [2024-04-18 21:00:53.691851] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.405 [2024-04-18 21:00:53.691857] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.405 [2024-04-18 21:00:53.691862] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:38.405 [2024-04-18 21:00:53.691908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.405 [2024-04-18 21:00:53.691917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.405 [2024-04-18 21:00:53.692022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.405 [2024-04-18 21:00:53.692023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.662 21:00:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:38.662 21:00:54 -- common/autotest_common.sh@850 -- # return 0 00:07:38.663 21:00:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:38.663 21:00:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:38.663 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 21:00:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:38.663 21:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.663 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 [2024-04-18 21:00:54.399367] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.663 21:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:38.663 21:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.663 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 21:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:38.663 21:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.663 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 21:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:38.663 21:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.663 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 21:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.663 21:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.663 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 [2024-04-18 21:00:54.451124] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.663 21:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:38.663 21:00:54 -- target/connect_disconnect.sh@34 -- # set +x 00:07:41.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.120 21:01:10 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:55.120 21:01:10 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:55.120 21:01:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:55.120 21:01:10 -- nvmf/common.sh@117 -- # sync 00:07:55.120 21:01:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.120 21:01:10 -- nvmf/common.sh@120 -- # set +e 00:07:55.120 21:01:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.120 21:01:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.120 rmmod nvme_tcp 00:07:55.120 rmmod nvme_fabrics 00:07:55.120 rmmod nvme_keyring 00:07:55.120 21:01:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.120 21:01:10 -- nvmf/common.sh@124 -- # set -e 00:07:55.120 21:01:10 -- nvmf/common.sh@125 -- # return 0 00:07:55.120 21:01:10 -- nvmf/common.sh@478 -- # '[' -n 2908233 ']' 00:07:55.120 21:01:10 -- nvmf/common.sh@479 -- # killprocess 2908233 00:07:55.120 21:01:10 -- common/autotest_common.sh@936 -- # '[' -z 2908233 ']' 00:07:55.120 21:01:10 -- common/autotest_common.sh@940 -- # kill -0 2908233 00:07:55.120 21:01:10 -- common/autotest_common.sh@941 -- # uname 00:07:55.120 21:01:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:55.120 21:01:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2908233 00:07:55.120 21:01:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:55.120 21:01:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:55.120 21:01:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2908233' 00:07:55.120 killing process with pid 2908233 00:07:55.120 21:01:10 -- common/autotest_common.sh@955 -- # kill 2908233 00:07:55.120 21:01:10 -- common/autotest_common.sh@960 -- # wait 2908233 00:07:55.120 21:01:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:55.120 21:01:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:55.120 21:01:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:55.120 21:01:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.120 21:01:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.120 21:01:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.120 21:01:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.120 21:01:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.656 21:01:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:57.656 00:07:57.656 real 0m25.443s 00:07:57.656 user 1m10.205s 00:07:57.656 sys 0m5.548s 00:07:57.656 21:01:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.656 21:01:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.656 ************************************ 00:07:57.656 END TEST nvmf_connect_disconnect 00:07:57.656 ************************************ 00:07:57.656 21:01:13 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:57.656 21:01:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:57.656 21:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.656 21:01:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.656 ************************************ 00:07:57.656 START TEST nvmf_multitarget 00:07:57.657 ************************************ 00:07:57.657 21:01:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:07:57.657 * Looking for test storage... 00:07:57.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.657 21:01:13 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.657 21:01:13 -- nvmf/common.sh@7 -- # uname -s 00:07:57.657 21:01:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.657 21:01:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.657 21:01:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.657 21:01:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.657 21:01:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.657 21:01:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.657 21:01:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.657 21:01:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.657 21:01:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.657 21:01:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.657 21:01:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.657 21:01:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.657 21:01:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.657 21:01:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.657 21:01:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.657 21:01:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.657 21:01:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.657 21:01:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.657 21:01:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.657 21:01:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.657 21:01:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.657 21:01:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.657 21:01:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.657 21:01:13 -- paths/export.sh@5 -- # export PATH 00:07:57.657 21:01:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.657 21:01:13 -- nvmf/common.sh@47 -- # : 0 00:07:57.657 21:01:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.657 21:01:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.657 21:01:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.657 21:01:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.657 21:01:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.657 21:01:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.657 21:01:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.657 21:01:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.657 21:01:13 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:57.657 21:01:13 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:57.657 21:01:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:57.657 21:01:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.657 21:01:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:57.657 21:01:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:57.657 21:01:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:57.657 21:01:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.657 21:01:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.657 21:01:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.657 21:01:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:57.657 21:01:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:57.657 21:01:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.657 21:01:13 -- common/autotest_common.sh@10 -- # set +x 00:08:02.928 21:01:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:02.928 21:01:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.928 21:01:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.928 21:01:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.928 21:01:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.928 21:01:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.928 21:01:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.928 21:01:18 -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.928 21:01:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.928 21:01:18 -- 
nvmf/common.sh@296 -- # e810=() 00:08:02.928 21:01:18 -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.928 21:01:18 -- nvmf/common.sh@297 -- # x722=() 00:08:02.928 21:01:18 -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.928 21:01:18 -- nvmf/common.sh@298 -- # mlx=() 00:08:02.928 21:01:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.928 21:01:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.928 21:01:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.928 21:01:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.928 21:01:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.928 21:01:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.928 21:01:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:02.928 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:02.928 21:01:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.928 21:01:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:02.928 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:02.928 21:01:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.928 21:01:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.928 21:01:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.928 21:01:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.928 21:01:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.928 21:01:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:08:02.928 Found net devices under 0000:86:00.0: cvl_0_0 00:08:02.928 21:01:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.928 21:01:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.928 21:01:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.928 21:01:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.928 21:01:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.928 21:01:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:02.928 Found net devices under 0000:86:00.1: cvl_0_1 00:08:02.928 21:01:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.928 21:01:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:02.928 21:01:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:02.928 21:01:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:02.928 21:01:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.928 21:01:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.928 21:01:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.928 21:01:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.928 21:01:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.928 21:01:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.928 21:01:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.928 21:01:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.928 21:01:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.928 21:01:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.928 21:01:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.928 21:01:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.928 21:01:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.928 21:01:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.928 21:01:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.928 21:01:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.928 21:01:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.928 21:01:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.928 21:01:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.928 21:01:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:08:02.928 00:08:02.928 --- 10.0.0.2 ping statistics --- 00:08:02.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.928 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:02.928 21:01:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:08:02.928 00:08:02.928 --- 10.0.0.1 ping statistics --- 00:08:02.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.928 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:08:02.928 21:01:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.928 21:01:18 -- nvmf/common.sh@411 -- # return 0 00:08:02.928 21:01:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:02.928 21:01:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.928 21:01:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:02.928 21:01:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.928 21:01:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:02.928 21:01:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:02.928 21:01:18 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:02.928 21:01:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:02.928 21:01:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:02.928 21:01:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.928 21:01:18 -- nvmf/common.sh@470 -- # nvmfpid=2914922 00:08:02.928 21:01:18 -- nvmf/common.sh@471 -- # waitforlisten 2914922 00:08:02.928 21:01:18 -- common/autotest_common.sh@817 -- # '[' -z 2914922 ']' 00:08:02.929 21:01:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.929 21:01:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:02.929 21:01:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.929 21:01:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.929 21:01:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:02.929 21:01:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.929 [2024-04-18 21:01:18.837867] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:08:02.929 [2024-04-18 21:01:18.837915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.188 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.188 [2024-04-18 21:01:18.904112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.188 [2024-04-18 21:01:18.983406] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.188 [2024-04-18 21:01:18.983440] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.188 [2024-04-18 21:01:18.983447] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.188 [2024-04-18 21:01:18.983454] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.188 [2024-04-18 21:01:18.983459] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:03.188 [2024-04-18 21:01:18.983504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.188 [2024-04-18 21:01:18.983604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.188 [2024-04-18 21:01:18.983628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.188 [2024-04-18 21:01:18.983629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.755 21:01:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:03.755 21:01:19 -- common/autotest_common.sh@850 -- # return 0 00:08:03.755 21:01:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:03.755 21:01:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:03.755 21:01:19 -- common/autotest_common.sh@10 -- # set +x 00:08:03.755 21:01:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.755 21:01:19 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:04.013 21:01:19 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:04.013 21:01:19 -- target/multitarget.sh@21 -- # jq length 00:08:04.013 21:01:19 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:04.013 21:01:19 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:04.013 "nvmf_tgt_1" 00:08:04.013 21:01:19 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:04.271 "nvmf_tgt_2" 00:08:04.271 21:01:19 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:04.271 21:01:19 -- target/multitarget.sh@28 -- # jq length 00:08:04.271 21:01:20 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:04.271 21:01:20 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:04.271 true 00:08:04.271 21:01:20 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:04.528 true 00:08:04.528 21:01:20 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:04.528 21:01:20 -- target/multitarget.sh@35 -- # jq length 00:08:04.528 21:01:20 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:04.528 21:01:20 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:04.528 21:01:20 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:04.528 21:01:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:04.528 21:01:20 -- nvmf/common.sh@117 -- # sync 00:08:04.528 21:01:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.528 21:01:20 -- nvmf/common.sh@120 -- # set +e 00:08:04.528 21:01:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.528 21:01:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:04.528 rmmod nvme_tcp 00:08:04.528 rmmod nvme_fabrics 00:08:04.528 rmmod nvme_keyring 00:08:04.528 21:01:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:04.787 21:01:20 -- nvmf/common.sh@124 -- # set -e 00:08:04.787 21:01:20 -- nvmf/common.sh@125 -- # return 0 
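The nvmf_multitarget test just exercised above is driven entirely by three RPCs exposed through test/nvmf/target/multitarget_rpc.py, with jq counting the targets after each step. Stripped of the workspace paths and xtrace noise, the sequence is (a sketch; $RPC stands for the full multitarget_rpc.py path used in the trace):

  RPC=./test/nvmf/target/multitarget_rpc.py

  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists at start
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32        # arguments copied from the trace
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default target + the two new ones
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target

The quoted "nvmf_tgt_1"/"nvmf_tgt_2" lines in the log are the RPC script echoing the name of each target it created.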
00:08:04.787 21:01:20 -- nvmf/common.sh@478 -- # '[' -n 2914922 ']' 00:08:04.787 21:01:20 -- nvmf/common.sh@479 -- # killprocess 2914922 00:08:04.787 21:01:20 -- common/autotest_common.sh@936 -- # '[' -z 2914922 ']' 00:08:04.787 21:01:20 -- common/autotest_common.sh@940 -- # kill -0 2914922 00:08:04.787 21:01:20 -- common/autotest_common.sh@941 -- # uname 00:08:04.787 21:01:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:04.787 21:01:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2914922 00:08:04.787 21:01:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:04.787 21:01:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:04.787 21:01:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2914922' 00:08:04.787 killing process with pid 2914922 00:08:04.787 21:01:20 -- common/autotest_common.sh@955 -- # kill 2914922 00:08:04.787 21:01:20 -- common/autotest_common.sh@960 -- # wait 2914922 00:08:05.059 21:01:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:05.059 21:01:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:05.059 21:01:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:05.059 21:01:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.059 21:01:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.059 21:01:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.059 21:01:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.059 21:01:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.107 21:01:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:07.107 00:08:07.107 real 0m9.563s 00:08:07.107 user 0m8.918s 00:08:07.107 sys 0m4.620s 00:08:07.107 21:01:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:07.107 21:01:22 -- common/autotest_common.sh@10 -- # set +x 00:08:07.107 ************************************ 00:08:07.107 END TEST nvmf_multitarget 00:08:07.107 ************************************ 00:08:07.107 21:01:22 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:07.107 21:01:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:07.107 21:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.107 21:01:22 -- common/autotest_common.sh@10 -- # set +x 00:08:07.107 ************************************ 00:08:07.107 START TEST nvmf_rpc 00:08:07.107 ************************************ 00:08:07.107 21:01:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:07.107 * Looking for test storage... 
00:08:07.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.107 21:01:23 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.107 21:01:23 -- nvmf/common.sh@7 -- # uname -s 00:08:07.107 21:01:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.107 21:01:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.107 21:01:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.107 21:01:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.107 21:01:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.107 21:01:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.107 21:01:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.107 21:01:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.107 21:01:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.366 21:01:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.366 21:01:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:07.366 21:01:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:07.366 21:01:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.366 21:01:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.366 21:01:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.367 21:01:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.367 21:01:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.367 21:01:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.367 21:01:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.367 21:01:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.367 21:01:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.367 21:01:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.367 21:01:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.367 21:01:23 -- paths/export.sh@5 -- # export PATH 00:08:07.367 21:01:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.367 21:01:23 -- nvmf/common.sh@47 -- # : 0 00:08:07.367 21:01:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.367 21:01:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.367 21:01:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.367 21:01:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.367 21:01:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.367 21:01:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.367 21:01:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.367 21:01:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.367 21:01:23 -- target/rpc.sh@11 -- # loops=5 00:08:07.367 21:01:23 -- target/rpc.sh@23 -- # nvmftestinit 00:08:07.367 21:01:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:07.367 21:01:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.367 21:01:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:07.367 21:01:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:07.367 21:01:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:07.367 21:01:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.367 21:01:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.367 21:01:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.367 21:01:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:07.367 21:01:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:07.367 21:01:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.367 21:01:23 -- common/autotest_common.sh@10 -- # set +x 00:08:13.940 21:01:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:13.940 21:01:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.940 21:01:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.940 21:01:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.940 21:01:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.940 21:01:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.940 21:01:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.940 21:01:28 -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.940 21:01:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.940 21:01:28 -- nvmf/common.sh@296 -- # e810=() 00:08:13.940 21:01:28 -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.940 
21:01:28 -- nvmf/common.sh@297 -- # x722=() 00:08:13.940 21:01:28 -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.940 21:01:28 -- nvmf/common.sh@298 -- # mlx=() 00:08:13.940 21:01:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.940 21:01:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.940 21:01:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.940 21:01:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.940 21:01:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.940 21:01:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.940 21:01:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:13.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:13.940 21:01:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.940 21:01:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:13.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:13.940 21:01:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.940 21:01:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.941 21:01:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.941 21:01:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.941 21:01:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.941 21:01:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.941 21:01:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:13.941 21:01:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.941 21:01:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:13.941 Found net devices under 0000:86:00.0: cvl_0_0 00:08:13.941 21:01:28 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:13.941 21:01:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.941 21:01:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.941 21:01:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:13.941 21:01:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.941 21:01:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:13.941 Found net devices under 0000:86:00.1: cvl_0_1 00:08:13.941 21:01:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.941 21:01:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:13.941 21:01:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:13.941 21:01:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:13.941 21:01:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:13.941 21:01:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:13.941 21:01:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.941 21:01:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.941 21:01:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.941 21:01:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.941 21:01:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.941 21:01:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.941 21:01:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.941 21:01:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.941 21:01:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.941 21:01:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.941 21:01:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.941 21:01:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.941 21:01:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.941 21:01:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.941 21:01:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.941 21:01:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.941 21:01:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.941 21:01:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.941 21:01:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.941 21:01:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:08:13.941 00:08:13.941 --- 10.0.0.2 ping statistics --- 00:08:13.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.941 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:13.941 21:01:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:08:13.941 00:08:13.941 --- 10.0.0.1 ping statistics --- 00:08:13.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.941 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:13.941 21:01:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.941 21:01:28 -- nvmf/common.sh@411 -- # return 0 00:08:13.941 21:01:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:13.941 21:01:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.941 21:01:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:13.941 21:01:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:13.941 21:01:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.941 21:01:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:13.941 21:01:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:13.941 21:01:28 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:13.941 21:01:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:13.941 21:01:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:13.941 21:01:28 -- common/autotest_common.sh@10 -- # set +x 00:08:13.941 21:01:28 -- nvmf/common.sh@470 -- # nvmfpid=2919144 00:08:13.941 21:01:28 -- nvmf/common.sh@471 -- # waitforlisten 2919144 00:08:13.941 21:01:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.941 21:01:28 -- common/autotest_common.sh@817 -- # '[' -z 2919144 ']' 00:08:13.941 21:01:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.941 21:01:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:13.941 21:01:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.941 21:01:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:13.941 21:01:28 -- common/autotest_common.sh@10 -- # set +x 00:08:13.941 [2024-04-18 21:01:28.978052] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:08:13.941 [2024-04-18 21:01:28.978095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.941 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.941 [2024-04-18 21:01:29.043072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.941 [2024-04-18 21:01:29.115063] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.941 [2024-04-18 21:01:29.115105] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.941 [2024-04-18 21:01:29.115111] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.941 [2024-04-18 21:01:29.115117] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.941 [2024-04-18 21:01:29.115122] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
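nvmfappstart (used by both tests) launches nvmf_tgt inside the target namespace and blocks until the application is answering RPCs on /var/tmp/spdk.sock; only then does the test proceed. A rough shape of it, under the assumption that the readiness check is a poll of the RPC socket (the real waitforlisten helper in autotest_common.sh does more, including the max_retries=100 limit visible in the trace and killing the app on timeout):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: reactors on cores 0-3
  nvmfpid=$!

  # wait for the RPC listener before issuing any nvmf_* RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

The '-e 0xFFFF' flag is what produces the 'Tracepoint Group Mask 0xFFFF specified' notice, and '-m 0xF' is why four reactors come up on cores 0 through 3.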
00:08:13.941 [2024-04-18 21:01:29.115180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.941 [2024-04-18 21:01:29.115273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.941 [2024-04-18 21:01:29.115363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.941 [2024-04-18 21:01:29.115364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.941 21:01:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:13.941 21:01:29 -- common/autotest_common.sh@850 -- # return 0 00:08:13.941 21:01:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:13.941 21:01:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:13.941 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:08:13.941 21:01:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.941 21:01:29 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:13.941 21:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.941 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:08:13.941 21:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.941 21:01:29 -- target/rpc.sh@26 -- # stats='{ 00:08:13.941 "tick_rate": 2300000000, 00:08:13.941 "poll_groups": [ 00:08:13.941 { 00:08:13.941 "name": "nvmf_tgt_poll_group_0", 00:08:13.941 "admin_qpairs": 0, 00:08:13.941 "io_qpairs": 0, 00:08:13.941 "current_admin_qpairs": 0, 00:08:13.941 "current_io_qpairs": 0, 00:08:13.941 "pending_bdev_io": 0, 00:08:13.941 "completed_nvme_io": 0, 00:08:13.941 "transports": [] 00:08:13.941 }, 00:08:13.941 { 00:08:13.941 "name": "nvmf_tgt_poll_group_1", 00:08:13.941 "admin_qpairs": 0, 00:08:13.941 "io_qpairs": 0, 00:08:13.941 "current_admin_qpairs": 0, 00:08:13.941 "current_io_qpairs": 0, 00:08:13.941 "pending_bdev_io": 0, 00:08:13.941 "completed_nvme_io": 0, 00:08:13.941 "transports": [] 00:08:13.941 }, 00:08:13.941 { 00:08:13.941 "name": "nvmf_tgt_poll_group_2", 00:08:13.941 "admin_qpairs": 0, 00:08:13.941 "io_qpairs": 0, 00:08:13.941 "current_admin_qpairs": 0, 00:08:13.941 "current_io_qpairs": 0, 00:08:13.941 "pending_bdev_io": 0, 00:08:13.941 "completed_nvme_io": 0, 00:08:13.941 "transports": [] 00:08:13.941 }, 00:08:13.941 { 00:08:13.941 "name": "nvmf_tgt_poll_group_3", 00:08:13.941 "admin_qpairs": 0, 00:08:13.941 "io_qpairs": 0, 00:08:13.941 "current_admin_qpairs": 0, 00:08:13.941 "current_io_qpairs": 0, 00:08:13.941 "pending_bdev_io": 0, 00:08:13.941 "completed_nvme_io": 0, 00:08:13.941 "transports": [] 00:08:13.941 } 00:08:13.941 ] 00:08:13.941 }' 00:08:13.941 21:01:29 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:13.941 21:01:29 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:13.941 21:01:29 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:13.941 21:01:29 -- target/rpc.sh@15 -- # wc -l 00:08:14.201 21:01:29 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:14.201 21:01:29 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:14.201 21:01:29 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:14.201 21:01:29 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.201 21:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.201 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:08:14.201 [2024-04-18 21:01:29.929752] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.201 21:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.201 21:01:29 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:14.201 21:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.201 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:08:14.201 21:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.201 21:01:29 -- target/rpc.sh@33 -- # stats='{ 00:08:14.201 "tick_rate": 2300000000, 00:08:14.201 "poll_groups": [ 00:08:14.201 { 00:08:14.201 "name": "nvmf_tgt_poll_group_0", 00:08:14.201 "admin_qpairs": 0, 00:08:14.201 "io_qpairs": 0, 00:08:14.201 "current_admin_qpairs": 0, 00:08:14.201 "current_io_qpairs": 0, 00:08:14.201 "pending_bdev_io": 0, 00:08:14.201 "completed_nvme_io": 0, 00:08:14.201 "transports": [ 00:08:14.201 { 00:08:14.201 "trtype": "TCP" 00:08:14.201 } 00:08:14.201 ] 00:08:14.201 }, 00:08:14.201 { 00:08:14.201 "name": "nvmf_tgt_poll_group_1", 00:08:14.201 "admin_qpairs": 0, 00:08:14.201 "io_qpairs": 0, 00:08:14.201 "current_admin_qpairs": 0, 00:08:14.201 "current_io_qpairs": 0, 00:08:14.201 "pending_bdev_io": 0, 00:08:14.201 "completed_nvme_io": 0, 00:08:14.201 "transports": [ 00:08:14.201 { 00:08:14.201 "trtype": "TCP" 00:08:14.201 } 00:08:14.201 ] 00:08:14.201 }, 00:08:14.201 { 00:08:14.201 "name": "nvmf_tgt_poll_group_2", 00:08:14.201 "admin_qpairs": 0, 00:08:14.201 "io_qpairs": 0, 00:08:14.201 "current_admin_qpairs": 0, 00:08:14.201 "current_io_qpairs": 0, 00:08:14.201 "pending_bdev_io": 0, 00:08:14.201 "completed_nvme_io": 0, 00:08:14.201 "transports": [ 00:08:14.201 { 00:08:14.201 "trtype": "TCP" 00:08:14.201 } 00:08:14.201 ] 00:08:14.201 }, 00:08:14.201 { 00:08:14.201 "name": "nvmf_tgt_poll_group_3", 00:08:14.201 "admin_qpairs": 0, 00:08:14.201 "io_qpairs": 0, 00:08:14.201 "current_admin_qpairs": 0, 00:08:14.201 "current_io_qpairs": 0, 00:08:14.201 "pending_bdev_io": 0, 00:08:14.201 "completed_nvme_io": 0, 00:08:14.201 "transports": [ 00:08:14.201 { 00:08:14.201 "trtype": "TCP" 00:08:14.201 } 00:08:14.201 ] 00:08:14.201 } 00:08:14.201 ] 00:08:14.201 }' 00:08:14.201 21:01:29 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:14.201 21:01:29 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:14.201 21:01:29 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:14.202 21:01:29 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:14.202 21:01:30 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:14.202 21:01:30 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:14.202 21:01:30 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:14.202 21:01:30 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:14.202 21:01:30 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:14.202 21:01:30 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:14.202 21:01:30 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:14.202 21:01:30 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:14.202 21:01:30 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:14.202 21:01:30 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:14.202 21:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.202 21:01:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.202 Malloc1 00:08:14.202 21:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.202 21:01:30 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:14.202 21:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.202 21:01:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.202 
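From this point rpc.sh walks the target through a normal bring-up: nvmf_get_stats first shows four empty poll groups, a TCP transport is created (after which every poll group reports a TCP transport with zero admin/io qpairs), a 64 MB / 512-byte-block malloc bdev is created as backing storage, and subsystem nqn.2016-06.io.spdk:cnode1 is defined. The steps that follow attach the namespace and listener and then exercise host access control; the two 'does not allow host' connect failures below are expected, since they prove a connect is rejected until the host NQN is whitelisted or allow_any_host is re-enabled. rpc_cmd forwards to SPDK's scripts/rpc.py, so done by hand the same sequence would look roughly like this (the rpc wrapper and $HOSTNQN placeholder are illustrative; flags are copied from the trace):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc1               # 64 MB bdev, 512-byte blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # disable allow_any_host: only whitelisted hosts may connect
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # nvme connect now fails with "does not allow host" until the host is added:
  rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"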
21:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.202 21:01:30 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.202 21:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.202 21:01:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.202 21:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.202 21:01:30 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:14.202 21:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.202 21:01:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.202 21:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.202 21:01:30 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.202 21:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.202 21:01:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.202 [2024-04-18 21:01:30.097899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.202 21:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.202 21:01:30 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:14.202 21:01:30 -- common/autotest_common.sh@638 -- # local es=0 00:08:14.202 21:01:30 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:14.202 21:01:30 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:14.202 21:01:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:14.202 21:01:30 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:14.202 21:01:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:14.202 21:01:30 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:14.202 21:01:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:14.202 21:01:30 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:14.202 21:01:30 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:14.202 21:01:30 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:14.202 [2024-04-18 21:01:30.126637] ctrlr.c: 801:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:14.461 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:14.461 could not add new controller: failed to write to nvme-fabrics device 00:08:14.461 21:01:30 -- common/autotest_common.sh@641 -- # es=1 00:08:14.461 21:01:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:14.461 21:01:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:14.461 21:01:30 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:08:14.461 21:01:30 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.461 21:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.461 21:01:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 21:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.461 21:01:30 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.840 21:01:31 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.840 21:01:31 -- common/autotest_common.sh@1184 -- # local i=0 00:08:15.840 21:01:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.840 21:01:31 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:15.840 21:01:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:17.748 21:01:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:17.748 21:01:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:17.748 21:01:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.748 21:01:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:17.748 21:01:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.748 21:01:33 -- common/autotest_common.sh@1194 -- # return 0 00:08:17.748 21:01:33 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.748 21:01:33 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.748 21:01:33 -- common/autotest_common.sh@1205 -- # local i=0 00:08:17.748 21:01:33 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:17.748 21:01:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.748 21:01:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:17.748 21:01:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.748 21:01:33 -- common/autotest_common.sh@1217 -- # return 0 00:08:17.748 21:01:33 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:17.748 21:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.748 21:01:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.748 21:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.748 21:01:33 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:17.748 21:01:33 -- common/autotest_common.sh@638 -- # local es=0 00:08:17.748 21:01:33 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:17.748 21:01:33 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:17.748 21:01:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:17.748 21:01:33 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:17.748 21:01:33 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:17.748 21:01:33 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:17.748 21:01:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:17.748 21:01:33 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:17.748 21:01:33 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:17.748 21:01:33 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:17.748 [2024-04-18 21:01:33.522778] ctrlr.c: 801:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:17.748 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:17.748 could not add new controller: failed to write to nvme-fabrics device 00:08:17.748 21:01:33 -- common/autotest_common.sh@641 -- # es=1 00:08:17.748 21:01:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:17.748 21:01:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:17.748 21:01:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:17.748 21:01:33 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:17.748 21:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.748 21:01:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.748 21:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.748 21:01:33 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.127 21:01:34 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.127 21:01:34 -- common/autotest_common.sh@1184 -- # local i=0 00:08:19.127 21:01:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.127 21:01:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:19.127 21:01:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:21.035 21:01:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:21.035 21:01:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:21.035 21:01:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.035 21:01:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:21.035 21:01:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.035 21:01:36 -- common/autotest_common.sh@1194 -- # return 0 00:08:21.035 21:01:36 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.035 21:01:36 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.035 21:01:36 -- common/autotest_common.sh@1205 -- # local i=0 00:08:21.035 21:01:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:21.035 21:01:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.035 21:01:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:21.035 21:01:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.035 21:01:36 -- common/autotest_common.sh@1217 -- # return 0 00:08:21.035 21:01:36 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.035 21:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.035 21:01:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.035 21:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.035 21:01:36 -- target/rpc.sh@81 -- # seq 1 5 00:08:21.035 21:01:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:21.035 21:01:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:21.035 21:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.035 21:01:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.035 21:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.035 21:01:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.035 21:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.035 21:01:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.035 [2024-04-18 21:01:36.909139] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.035 21:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.035 21:01:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:21.035 21:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.035 21:01:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.035 21:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.035 21:01:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:21.035 21:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.035 21:01:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.035 21:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.035 21:01:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:22.416 21:01:38 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.416 21:01:38 -- common/autotest_common.sh@1184 -- # local i=0 00:08:22.416 21:01:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.416 21:01:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:22.416 21:01:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:24.325 21:01:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:24.325 21:01:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:24.325 21:01:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.325 21:01:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:24.325 21:01:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.325 21:01:40 -- common/autotest_common.sh@1194 -- # return 0 00:08:24.325 21:01:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:24.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.325 21:01:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.325 21:01:40 -- common/autotest_common.sh@1205 -- # local i=0 00:08:24.325 21:01:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:24.325 21:01:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
00:08:24.325 21:01:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:24.325 21:01:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.325 21:01:40 -- common/autotest_common.sh@1217 -- # return 0 00:08:24.325 21:01:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.325 21:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.325 21:01:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.325 21:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.325 21:01:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.325 21:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.325 21:01:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.325 21:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.325 21:01:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:24.325 21:01:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.325 21:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.325 21:01:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.325 21:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.325 21:01:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.325 21:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.325 21:01:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.325 [2024-04-18 21:01:40.180874] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.325 21:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.325 21:01:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:24.325 21:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.325 21:01:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.325 21:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.325 21:01:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:24.325 21:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.325 21:01:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.325 21:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.325 21:01:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:25.705 21:01:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:25.705 21:01:41 -- common/autotest_common.sh@1184 -- # local i=0 00:08:25.705 21:01:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.705 21:01:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:25.705 21:01:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:27.610 21:01:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:27.610 21:01:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:27.610 21:01:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.610 21:01:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:27.610 21:01:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.610 21:01:43 -- 
common/autotest_common.sh@1194 -- # return 0 00:08:27.610 21:01:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.610 21:01:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.610 21:01:43 -- common/autotest_common.sh@1205 -- # local i=0 00:08:27.610 21:01:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:27.610 21:01:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.610 21:01:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:27.610 21:01:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.610 21:01:43 -- common/autotest_common.sh@1217 -- # return 0 00:08:27.610 21:01:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.610 21:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.610 21:01:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.610 21:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.610 21:01:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.610 21:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.610 21:01:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.610 21:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.610 21:01:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:27.610 21:01:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.610 21:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.610 21:01:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.610 21:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.610 21:01:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.610 21:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.610 21:01:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.610 [2024-04-18 21:01:43.502552] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.610 21:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.610 21:01:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:27.610 21:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.610 21:01:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.610 21:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.610 21:01:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.610 21:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.610 21:01:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.610 21:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.610 21:01:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.990 21:01:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.990 21:01:44 -- common/autotest_common.sh@1184 -- # local i=0 00:08:28.990 21:01:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.990 21:01:44 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:08:28.990 21:01:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:30.897 21:01:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:30.897 21:01:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:30.897 21:01:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.897 21:01:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:30.897 21:01:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.897 21:01:46 -- common/autotest_common.sh@1194 -- # return 0 00:08:30.897 21:01:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.898 21:01:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.898 21:01:46 -- common/autotest_common.sh@1205 -- # local i=0 00:08:30.898 21:01:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:30.898 21:01:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.898 21:01:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:30.898 21:01:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.898 21:01:46 -- common/autotest_common.sh@1217 -- # return 0 00:08:30.898 21:01:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.898 21:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.898 21:01:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.898 21:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.898 21:01:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.898 21:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.898 21:01:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.898 21:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.898 21:01:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:30.898 21:01:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.898 21:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.898 21:01:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.898 21:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.898 21:01:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.898 21:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.898 21:01:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.898 [2024-04-18 21:01:46.816504] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.898 21:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.898 21:01:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:30.898 21:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.898 21:01:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.157 21:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.157 21:01:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:31.157 21:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.157 21:01:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.157 21:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.157 
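Everything from target/rpc.sh@81 onwards is the main loop of the test (loops=5): each of the five iterations rebuilds the subsystem from scratch, connects with nvme-cli from the initiator side, waits for the namespace to show up, then disconnects and deletes everything again, so repeated create/connect/delete cycles are what is actually being stressed. One iteration, with the tracing stripped and the same rpc wrapper as above (the uuid-based host NQN/ID from the log are abbreviated to $HOSTNQN/$HOSTID):

  for i in $(seq 1 5); do
      rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
          --hostnqn="$HOSTNQN" --hostid="$HOSTID"
      # waitforserial SPDKISFASTANDAWESOME: poll until the namespace appears (see the sketch below)
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      # waitforserial_disconnect SPDKISFASTANDAWESOME: poll until it is gone again

      rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done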
21:01:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.537 21:01:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.537 21:01:48 -- common/autotest_common.sh@1184 -- # local i=0 00:08:32.537 21:01:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.537 21:01:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:32.537 21:01:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:34.445 21:01:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:34.445 21:01:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:34.445 21:01:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.445 21:01:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:34.445 21:01:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.445 21:01:50 -- common/autotest_common.sh@1194 -- # return 0 00:08:34.445 21:01:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.445 21:01:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.445 21:01:50 -- common/autotest_common.sh@1205 -- # local i=0 00:08:34.445 21:01:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:34.445 21:01:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.445 21:01:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:34.445 21:01:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.445 21:01:50 -- common/autotest_common.sh@1217 -- # return 0 00:08:34.445 21:01:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.445 21:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.445 21:01:50 -- common/autotest_common.sh@10 -- # set +x 00:08:34.445 21:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.445 21:01:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.445 21:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.445 21:01:50 -- common/autotest_common.sh@10 -- # set +x 00:08:34.445 21:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.445 21:01:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:34.445 21:01:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.445 21:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.445 21:01:50 -- common/autotest_common.sh@10 -- # set +x 00:08:34.445 21:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.445 21:01:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.445 21:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.445 21:01:50 -- common/autotest_common.sh@10 -- # set +x 00:08:34.445 [2024-04-18 21:01:50.180065] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.445 21:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.445 21:01:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:34.445 
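The waitforserial / waitforserial_disconnect helpers showing up in every iteration are simple lsblk polls keyed on the subsystem serial number (SPDKISFASTANDAWESOME): they list block devices with their serials and loop until the expected device has appeared or disappeared. In outline (the real helpers in autotest_common.sh also take a device count and cap the retries, 15 in this trace; this is only the shape of the check):

  waitforserial() {              # wait until a block device with serial $1 exists
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1
  }

  waitforserial_disconnect() {   # wait until no block device carries serial $1
      local serial=$1
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          sleep 1
      done
  }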
21:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.445 21:01:50 -- common/autotest_common.sh@10 -- # set +x 00:08:34.445 21:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.445 21:01:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.445 21:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.445 21:01:50 -- common/autotest_common.sh@10 -- # set +x 00:08:34.445 21:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.445 21:01:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.383 21:01:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.383 21:01:51 -- common/autotest_common.sh@1184 -- # local i=0 00:08:35.383 21:01:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.383 21:01:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:35.383 21:01:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:37.925 21:01:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:37.925 21:01:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:37.925 21:01:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.925 21:01:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:37.925 21:01:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.925 21:01:53 -- common/autotest_common.sh@1194 -- # return 0 00:08:37.925 21:01:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.925 21:01:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.925 21:01:53 -- common/autotest_common.sh@1205 -- # local i=0 00:08:37.925 21:01:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:37.925 21:01:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.925 21:01:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:37.925 21:01:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.925 21:01:53 -- common/autotest_common.sh@1217 -- # return 0 00:08:37.925 21:01:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@99 -- # seq 1 5 00:08:37.925 21:01:53 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.925 21:01:53 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 [2024-04-18 21:01:53.494068] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.925 21:01:53 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 [2024-04-18 21:01:53.542161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.925 21:01:53 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 [2024-04-18 21:01:53.590300] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.925 21:01:53 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.925 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.925 21:01:53 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.925 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.925 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 [2024-04-18 21:01:53.642494] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 
21:01:53 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.926 21:01:53 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 [2024-04-18 21:01:53.690663] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:08:37.926 21:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.926 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.926 21:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.926 21:01:53 -- target/rpc.sh@110 -- # stats='{ 00:08:37.926 "tick_rate": 2300000000, 00:08:37.926 "poll_groups": [ 00:08:37.926 { 00:08:37.926 "name": "nvmf_tgt_poll_group_0", 00:08:37.926 "admin_qpairs": 2, 00:08:37.926 "io_qpairs": 168, 00:08:37.926 "current_admin_qpairs": 0, 00:08:37.926 "current_io_qpairs": 0, 00:08:37.926 "pending_bdev_io": 0, 00:08:37.926 "completed_nvme_io": 185, 00:08:37.926 "transports": [ 00:08:37.926 { 00:08:37.926 "trtype": "TCP" 00:08:37.926 } 00:08:37.926 ] 00:08:37.926 }, 00:08:37.926 { 00:08:37.926 "name": "nvmf_tgt_poll_group_1", 00:08:37.926 "admin_qpairs": 2, 00:08:37.926 "io_qpairs": 168, 00:08:37.926 "current_admin_qpairs": 0, 00:08:37.926 "current_io_qpairs": 0, 00:08:37.926 "pending_bdev_io": 0, 00:08:37.926 "completed_nvme_io": 302, 00:08:37.926 "transports": [ 00:08:37.926 { 00:08:37.926 "trtype": "TCP" 00:08:37.926 } 00:08:37.926 ] 00:08:37.926 }, 00:08:37.926 { 00:08:37.926 "name": "nvmf_tgt_poll_group_2", 00:08:37.926 "admin_qpairs": 1, 00:08:37.926 "io_qpairs": 168, 00:08:37.926 "current_admin_qpairs": 0, 00:08:37.926 "current_io_qpairs": 0, 00:08:37.926 "pending_bdev_io": 0, 00:08:37.926 "completed_nvme_io": 266, 00:08:37.926 "transports": [ 00:08:37.926 { 00:08:37.926 "trtype": "TCP" 00:08:37.926 } 00:08:37.926 ] 00:08:37.926 }, 00:08:37.926 { 00:08:37.926 "name": "nvmf_tgt_poll_group_3", 00:08:37.926 "admin_qpairs": 2, 00:08:37.926 "io_qpairs": 168, 00:08:37.926 "current_admin_qpairs": 0, 00:08:37.926 "current_io_qpairs": 0, 00:08:37.926 "pending_bdev_io": 0, 00:08:37.926 "completed_nvme_io": 269, 00:08:37.926 "transports": [ 00:08:37.926 { 00:08:37.926 "trtype": "TCP" 00:08:37.926 } 00:08:37.926 ] 00:08:37.926 } 00:08:37.926 ] 00:08:37.926 }' 00:08:37.926 21:01:53 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:37.926 21:01:53 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:37.926 21:01:53 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:37.926 21:01:53 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:37.926 21:01:53 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:37.926 21:01:53 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:37.926 21:01:53 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:37.926 21:01:53 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:37.926 21:01:53 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:37.926 21:01:53 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:37.926 21:01:53 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:37.926 21:01:53 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:37.926 21:01:53 -- target/rpc.sh@123 -- # nvmftestfini 00:08:37.926 21:01:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:37.926 21:01:53 -- nvmf/common.sh@117 -- # sync 00:08:37.926 21:01:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.926 21:01:53 -- nvmf/common.sh@120 -- # set +e 00:08:37.926 21:01:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.926 21:01:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.926 rmmod nvme_tcp 00:08:38.198 rmmod nvme_fabrics 00:08:38.198 rmmod nvme_keyring 00:08:38.198 21:01:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.198 21:01:53 -- nvmf/common.sh@124 -- # set -e 00:08:38.198 21:01:53 -- 
nvmf/common.sh@125 -- # return 0 00:08:38.198 21:01:53 -- nvmf/common.sh@478 -- # '[' -n 2919144 ']' 00:08:38.198 21:01:53 -- nvmf/common.sh@479 -- # killprocess 2919144 00:08:38.198 21:01:53 -- common/autotest_common.sh@936 -- # '[' -z 2919144 ']' 00:08:38.198 21:01:53 -- common/autotest_common.sh@940 -- # kill -0 2919144 00:08:38.198 21:01:53 -- common/autotest_common.sh@941 -- # uname 00:08:38.198 21:01:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:38.198 21:01:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2919144 00:08:38.198 21:01:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:38.198 21:01:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:38.198 21:01:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2919144' 00:08:38.198 killing process with pid 2919144 00:08:38.198 21:01:53 -- common/autotest_common.sh@955 -- # kill 2919144 00:08:38.198 21:01:53 -- common/autotest_common.sh@960 -- # wait 2919144 00:08:38.474 21:01:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:38.474 21:01:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:38.474 21:01:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:38.474 21:01:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.474 21:01:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.474 21:01:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.474 21:01:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.474 21:01:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.462 21:01:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.462 00:08:40.462 real 0m33.305s 00:08:40.462 user 1m41.486s 00:08:40.462 sys 0m6.133s 00:08:40.462 21:01:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:40.462 21:01:56 -- common/autotest_common.sh@10 -- # set +x 00:08:40.462 ************************************ 00:08:40.462 END TEST nvmf_rpc 00:08:40.462 ************************************ 00:08:40.462 21:01:56 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:40.462 21:01:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:40.462 21:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.462 21:01:56 -- common/autotest_common.sh@10 -- # set +x 00:08:40.723 ************************************ 00:08:40.723 START TEST nvmf_invalid 00:08:40.723 ************************************ 00:08:40.723 21:01:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:40.723 * Looking for test storage... 
00:08:40.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.723 21:01:56 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.723 21:01:56 -- nvmf/common.sh@7 -- # uname -s 00:08:40.723 21:01:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.723 21:01:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.723 21:01:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.723 21:01:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.723 21:01:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.723 21:01:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.723 21:01:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.723 21:01:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.723 21:01:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.723 21:01:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.723 21:01:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.723 21:01:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.723 21:01:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.723 21:01:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.723 21:01:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.723 21:01:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.723 21:01:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.723 21:01:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.723 21:01:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.723 21:01:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.723 21:01:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.723 21:01:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.723 21:01:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.723 21:01:56 -- paths/export.sh@5 -- # export PATH 00:08:40.723 21:01:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.723 21:01:56 -- nvmf/common.sh@47 -- # : 0 00:08:40.723 21:01:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.723 21:01:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.723 21:01:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.723 21:01:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.723 21:01:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.723 21:01:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.723 21:01:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.723 21:01:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.723 21:01:56 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:40.723 21:01:56 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.723 21:01:56 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:40.723 21:01:56 -- target/invalid.sh@14 -- # target=foobar 00:08:40.723 21:01:56 -- target/invalid.sh@16 -- # RANDOM=0 00:08:40.723 21:01:56 -- target/invalid.sh@34 -- # nvmftestinit 00:08:40.723 21:01:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:40.723 21:01:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.723 21:01:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:40.723 21:01:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:40.723 21:01:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:40.723 21:01:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.723 21:01:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.723 21:01:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.723 21:01:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:40.723 21:01:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:40.723 21:01:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.723 21:01:56 -- common/autotest_common.sh@10 -- # set +x 00:08:47.294 21:02:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:47.294 21:02:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.294 21:02:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.294 21:02:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.294 21:02:02 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.294 21:02:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.294 21:02:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.294 21:02:02 -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.294 21:02:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.294 21:02:02 -- nvmf/common.sh@296 -- # e810=() 00:08:47.294 21:02:02 -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.294 21:02:02 -- nvmf/common.sh@297 -- # x722=() 00:08:47.294 21:02:02 -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.294 21:02:02 -- nvmf/common.sh@298 -- # mlx=() 00:08:47.294 21:02:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.294 21:02:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.294 21:02:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.294 21:02:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:47.294 21:02:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.294 21:02:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.294 21:02:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:47.294 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:47.294 21:02:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.294 21:02:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:47.294 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:47.294 21:02:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.294 21:02:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.294 
21:02:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.294 21:02:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:47.294 21:02:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.294 21:02:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:47.294 Found net devices under 0000:86:00.0: cvl_0_0 00:08:47.294 21:02:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.294 21:02:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.294 21:02:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.294 21:02:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:47.294 21:02:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.294 21:02:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:47.294 Found net devices under 0000:86:00.1: cvl_0_1 00:08:47.294 21:02:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.294 21:02:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:47.294 21:02:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:47.294 21:02:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:47.294 21:02:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:47.294 21:02:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.294 21:02:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.294 21:02:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.295 21:02:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:47.295 21:02:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.295 21:02:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.295 21:02:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:47.295 21:02:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.295 21:02:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.295 21:02:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:47.295 21:02:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:47.295 21:02:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.295 21:02:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.295 21:02:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.295 21:02:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.295 21:02:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:47.295 21:02:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.295 21:02:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.295 21:02:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.295 21:02:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:47.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:47.295 00:08:47.295 --- 10.0.0.2 ping statistics --- 00:08:47.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.295 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:47.295 21:02:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:08:47.295 00:08:47.295 --- 10.0.0.1 ping statistics --- 00:08:47.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.295 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:08:47.295 21:02:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.295 21:02:02 -- nvmf/common.sh@411 -- # return 0 00:08:47.295 21:02:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:47.295 21:02:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.295 21:02:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:47.295 21:02:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:47.295 21:02:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.295 21:02:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:47.295 21:02:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:47.295 21:02:02 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:47.295 21:02:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:47.295 21:02:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:47.295 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:08:47.295 21:02:02 -- nvmf/common.sh@470 -- # nvmfpid=2927354 00:08:47.295 21:02:02 -- nvmf/common.sh@471 -- # waitforlisten 2927354 00:08:47.295 21:02:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.295 21:02:02 -- common/autotest_common.sh@817 -- # '[' -z 2927354 ']' 00:08:47.295 21:02:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.295 21:02:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:47.295 21:02:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.295 21:02:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:47.295 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:08:47.295 [2024-04-18 21:02:02.767018] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:08:47.295 [2024-04-18 21:02:02.767056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.295 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.295 [2024-04-18 21:02:02.829687] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.295 [2024-04-18 21:02:02.906689] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.295 [2024-04-18 21:02:02.906728] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.295 [2024-04-18 21:02:02.906735] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.295 [2024-04-18 21:02:02.906740] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.295 [2024-04-18 21:02:02.906745] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:47.295 [2024-04-18 21:02:02.906813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.295 [2024-04-18 21:02:02.906906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.295 [2024-04-18 21:02:02.906995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.295 [2024-04-18 21:02:02.906996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.860 21:02:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:47.860 21:02:03 -- common/autotest_common.sh@850 -- # return 0 00:08:47.860 21:02:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:47.860 21:02:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:47.860 21:02:03 -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 21:02:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.861 21:02:03 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:47.861 21:02:03 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18448 00:08:47.861 [2024-04-18 21:02:03.764776] nvmf_rpc.c: 405:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:48.119 21:02:03 -- target/invalid.sh@40 -- # out='request: 00:08:48.119 { 00:08:48.119 "nqn": "nqn.2016-06.io.spdk:cnode18448", 00:08:48.119 "tgt_name": "foobar", 00:08:48.119 "method": "nvmf_create_subsystem", 00:08:48.119 "req_id": 1 00:08:48.119 } 00:08:48.119 Got JSON-RPC error response 00:08:48.119 response: 00:08:48.119 { 00:08:48.119 "code": -32603, 00:08:48.119 "message": "Unable to find target foobar" 00:08:48.119 }' 00:08:48.119 21:02:03 -- target/invalid.sh@41 -- # [[ request: 00:08:48.119 { 00:08:48.119 "nqn": "nqn.2016-06.io.spdk:cnode18448", 00:08:48.119 "tgt_name": "foobar", 00:08:48.119 "method": "nvmf_create_subsystem", 00:08:48.119 "req_id": 1 00:08:48.119 } 00:08:48.119 Got JSON-RPC error response 00:08:48.119 response: 00:08:48.119 { 00:08:48.119 "code": -32603, 00:08:48.119 "message": "Unable to find target foobar" 00:08:48.119 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:48.119 21:02:03 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:48.119 21:02:03 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12647 00:08:48.119 [2024-04-18 21:02:03.961481] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12647: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:48.119 21:02:03 -- target/invalid.sh@45 -- # out='request: 00:08:48.119 { 00:08:48.119 "nqn": "nqn.2016-06.io.spdk:cnode12647", 00:08:48.119 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:48.119 "method": "nvmf_create_subsystem", 00:08:48.119 "req_id": 1 00:08:48.119 } 00:08:48.119 Got JSON-RPC error response 00:08:48.119 response: 00:08:48.119 { 00:08:48.119 "code": -32602, 00:08:48.119 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:48.119 }' 00:08:48.119 21:02:03 -- target/invalid.sh@46 -- # [[ request: 00:08:48.119 { 00:08:48.119 "nqn": "nqn.2016-06.io.spdk:cnode12647", 00:08:48.119 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:48.119 "method": "nvmf_create_subsystem", 00:08:48.119 "req_id": 1 00:08:48.119 } 00:08:48.119 Got JSON-RPC error response 00:08:48.119 response: 00:08:48.119 { 
00:08:48.119 "code": -32602, 00:08:48.119 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:48.119 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:48.119 21:02:03 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:48.119 21:02:03 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29710 00:08:48.377 [2024-04-18 21:02:04.154117] nvmf_rpc.c: 431:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29710: invalid model number 'SPDK_Controller' 00:08:48.377 21:02:04 -- target/invalid.sh@50 -- # out='request: 00:08:48.377 { 00:08:48.377 "nqn": "nqn.2016-06.io.spdk:cnode29710", 00:08:48.377 "model_number": "SPDK_Controller\u001f", 00:08:48.377 "method": "nvmf_create_subsystem", 00:08:48.377 "req_id": 1 00:08:48.377 } 00:08:48.377 Got JSON-RPC error response 00:08:48.377 response: 00:08:48.377 { 00:08:48.377 "code": -32602, 00:08:48.377 "message": "Invalid MN SPDK_Controller\u001f" 00:08:48.377 }' 00:08:48.377 21:02:04 -- target/invalid.sh@51 -- # [[ request: 00:08:48.377 { 00:08:48.377 "nqn": "nqn.2016-06.io.spdk:cnode29710", 00:08:48.377 "model_number": "SPDK_Controller\u001f", 00:08:48.377 "method": "nvmf_create_subsystem", 00:08:48.377 "req_id": 1 00:08:48.377 } 00:08:48.377 Got JSON-RPC error response 00:08:48.377 response: 00:08:48.377 { 00:08:48.377 "code": -32602, 00:08:48.377 "message": "Invalid MN SPDK_Controller\u001f" 00:08:48.377 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:48.377 21:02:04 -- target/invalid.sh@54 -- # gen_random_s 21 00:08:48.377 21:02:04 -- target/invalid.sh@19 -- # local length=21 ll 00:08:48.377 21:02:04 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:48.377 21:02:04 -- target/invalid.sh@21 -- # local chars 00:08:48.377 21:02:04 -- target/invalid.sh@22 -- # local string 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 61 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+== 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 33 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+='!' 
00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 92 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+='\' 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 39 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+=\' 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 83 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+=S 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 91 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+='[' 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 83 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+=S 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 110 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+=n 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 60 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+='<' 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 36 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+='$' 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 78 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # string+=N 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.377 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # printf %x 112 00:08:48.377 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+=p 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # printf %x 107 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+=k 
00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # printf %x 75 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+=K 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # printf %x 48 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+=0 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # printf %x 37 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+=% 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # printf %x 104 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+=h 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # printf %x 104 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+=h 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # printf %x 123 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+='{' 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # printf %x 47 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:48.378 21:02:04 -- target/invalid.sh@25 -- # string+=/ 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.378 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 109 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=m 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@28 -- # [[ = == \- ]] 00:08:48.636 21:02:04 -- target/invalid.sh@31 -- # echo '=!\'\''S[Sn<$NpkK0%hh{/m' 00:08:48.636 21:02:04 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '=!\'\''S[Sn<$NpkK0%hh{/m' nqn.2016-06.io.spdk:cnode3423 00:08:48.636 [2024-04-18 21:02:04.471174] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3423: invalid serial number '=!\'S[Sn<$NpkK0%hh{/m' 00:08:48.636 21:02:04 -- target/invalid.sh@54 -- # out='request: 00:08:48.636 { 00:08:48.636 "nqn": "nqn.2016-06.io.spdk:cnode3423", 00:08:48.636 "serial_number": "=!\\'\''S[Sn<$NpkK0%hh{/m", 00:08:48.636 "method": "nvmf_create_subsystem", 00:08:48.636 "req_id": 1 00:08:48.636 } 00:08:48.636 Got JSON-RPC error 
response 00:08:48.636 response: 00:08:48.636 { 00:08:48.636 "code": -32602, 00:08:48.636 "message": "Invalid SN =!\\'\''S[Sn<$NpkK0%hh{/m" 00:08:48.636 }' 00:08:48.636 21:02:04 -- target/invalid.sh@55 -- # [[ request: 00:08:48.636 { 00:08:48.636 "nqn": "nqn.2016-06.io.spdk:cnode3423", 00:08:48.636 "serial_number": "=!\\'S[Sn<$NpkK0%hh{/m", 00:08:48.636 "method": "nvmf_create_subsystem", 00:08:48.636 "req_id": 1 00:08:48.636 } 00:08:48.636 Got JSON-RPC error response 00:08:48.636 response: 00:08:48.636 { 00:08:48.636 "code": -32602, 00:08:48.636 "message": "Invalid SN =!\\'S[Sn<$NpkK0%hh{/m" 00:08:48.636 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:48.636 21:02:04 -- target/invalid.sh@58 -- # gen_random_s 41 00:08:48.636 21:02:04 -- target/invalid.sh@19 -- # local length=41 ll 00:08:48.636 21:02:04 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:48.636 21:02:04 -- target/invalid.sh@21 -- # local chars 00:08:48.636 21:02:04 -- target/invalid.sh@22 -- # local string 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 37 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=% 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 43 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=+ 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 107 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=k 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 78 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=N 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 115 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=s 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 65 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=A 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 
21:02:04 -- target/invalid.sh@25 -- # printf %x 55 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=7 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 49 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+=1 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 126 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # string+='~' 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.636 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.636 21:02:04 -- target/invalid.sh@25 -- # printf %x 75 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # string+=K 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # printf %x 70 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # string+=F 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # printf %x 87 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # string+=W 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # printf %x 113 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # string+=q 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # printf %x 84 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # string+=T 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # printf %x 34 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:48.895 21:02:04 -- target/invalid.sh@25 -- # string+='"' 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.895 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 52 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=4 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 94 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+='^' 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 
21:02:04 -- target/invalid.sh@25 -- # printf %x 39 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=\' 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 76 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=L 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 59 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=';' 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 77 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=M 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 37 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=% 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 123 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+='{' 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 44 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=, 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 104 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=h 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 45 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=- 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 46 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=. 
00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 84 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=T 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 110 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=n 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 36 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+='$' 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 97 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=a 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 66 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=B 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 109 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=m 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 125 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+='}' 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 72 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=H 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 105 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=i 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 72 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=H 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 62 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+='>' 
00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 91 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+='[' 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 126 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+='~' 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # printf %x 87 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:48.896 21:02:04 -- target/invalid.sh@25 -- # string+=W 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:48.896 21:02:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:48.896 21:02:04 -- target/invalid.sh@28 -- # [[ % == \- ]] 00:08:48.896 21:02:04 -- target/invalid.sh@31 -- # echo '%+kNsA71~KFWqT"4^'\''L;M%{,h-.Tn$aBm}HiH>[~W' 00:08:48.896 21:02:04 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '%+kNsA71~KFWqT"4^'\''L;M%{,h-.Tn$aBm}HiH>[~W' nqn.2016-06.io.spdk:cnode31817 00:08:49.155 [2024-04-18 21:02:04.916679] nvmf_rpc.c: 431:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31817: invalid model number '%+kNsA71~KFWqT"4^'L;M%{,h-.Tn$aBm}HiH>[~W' 00:08:49.155 21:02:04 -- target/invalid.sh@58 -- # out='request: 00:08:49.155 { 00:08:49.155 "nqn": "nqn.2016-06.io.spdk:cnode31817", 00:08:49.155 "model_number": "%+kNsA71~KFWqT\"4^'\''L;M%{,h-.Tn$aBm}HiH>[~W", 00:08:49.155 "method": "nvmf_create_subsystem", 00:08:49.155 "req_id": 1 00:08:49.155 } 00:08:49.155 Got JSON-RPC error response 00:08:49.155 response: 00:08:49.155 { 00:08:49.155 "code": -32602, 00:08:49.155 "message": "Invalid MN %+kNsA71~KFWqT\"4^'\''L;M%{,h-.Tn$aBm}HiH>[~W" 00:08:49.155 }' 00:08:49.155 21:02:04 -- target/invalid.sh@59 -- # [[ request: 00:08:49.155 { 00:08:49.155 "nqn": "nqn.2016-06.io.spdk:cnode31817", 00:08:49.155 "model_number": "%+kNsA71~KFWqT\"4^'L;M%{,h-.Tn$aBm}HiH>[~W", 00:08:49.155 "method": "nvmf_create_subsystem", 00:08:49.155 "req_id": 1 00:08:49.155 } 00:08:49.155 Got JSON-RPC error response 00:08:49.155 response: 00:08:49.155 { 00:08:49.155 "code": -32602, 00:08:49.155 "message": "Invalid MN %+kNsA71~KFWqT\"4^'L;M%{,h-.Tn$aBm}HiH>[~W" 00:08:49.155 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:49.155 21:02:04 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:49.412 [2024-04-18 21:02:05.097378] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.412 21:02:05 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:49.412 21:02:05 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:49.412 21:02:05 -- target/invalid.sh@67 -- # echo '' 00:08:49.412 21:02:05 -- target/invalid.sh@67 -- # head -n 1 00:08:49.412 21:02:05 -- target/invalid.sh@67 -- # IP= 00:08:49.412 21:02:05 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:49.671 [2024-04-18 21:02:05.486695] nvmf_rpc.c: 796:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:49.671 21:02:05 -- target/invalid.sh@69 -- # out='request: 00:08:49.671 { 00:08:49.671 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:49.671 "listen_address": { 00:08:49.671 "trtype": "tcp", 00:08:49.671 "traddr": "", 00:08:49.671 "trsvcid": "4421" 00:08:49.671 }, 00:08:49.671 "method": "nvmf_subsystem_remove_listener", 00:08:49.671 "req_id": 1 00:08:49.671 } 00:08:49.671 Got JSON-RPC error response 00:08:49.671 response: 00:08:49.671 { 00:08:49.671 "code": -32602, 00:08:49.671 "message": "Invalid parameters" 00:08:49.671 }' 00:08:49.671 21:02:05 -- target/invalid.sh@70 -- # [[ request: 00:08:49.671 { 00:08:49.671 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:49.671 "listen_address": { 00:08:49.671 "trtype": "tcp", 00:08:49.671 "traddr": "", 00:08:49.671 "trsvcid": "4421" 00:08:49.671 }, 00:08:49.671 "method": "nvmf_subsystem_remove_listener", 00:08:49.671 "req_id": 1 00:08:49.671 } 00:08:49.671 Got JSON-RPC error response 00:08:49.671 response: 00:08:49.671 { 00:08:49.671 "code": -32602, 00:08:49.671 "message": "Invalid parameters" 00:08:49.671 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:49.671 21:02:05 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30807 -i 0 00:08:49.930 [2024-04-18 21:02:05.675276] nvmf_rpc.c: 443:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30807: invalid cntlid range [0-65519] 00:08:49.930 21:02:05 -- target/invalid.sh@73 -- # out='request: 00:08:49.930 { 00:08:49.930 "nqn": "nqn.2016-06.io.spdk:cnode30807", 00:08:49.930 "min_cntlid": 0, 00:08:49.930 "method": "nvmf_create_subsystem", 00:08:49.930 "req_id": 1 00:08:49.930 } 00:08:49.930 Got JSON-RPC error response 00:08:49.930 response: 00:08:49.930 { 00:08:49.930 "code": -32602, 00:08:49.930 "message": "Invalid cntlid range [0-65519]" 00:08:49.930 }' 00:08:49.930 21:02:05 -- target/invalid.sh@74 -- # [[ request: 00:08:49.930 { 00:08:49.930 "nqn": "nqn.2016-06.io.spdk:cnode30807", 00:08:49.930 "min_cntlid": 0, 00:08:49.930 "method": "nvmf_create_subsystem", 00:08:49.930 "req_id": 1 00:08:49.930 } 00:08:49.930 Got JSON-RPC error response 00:08:49.930 response: 00:08:49.930 { 00:08:49.930 "code": -32602, 00:08:49.930 "message": "Invalid cntlid range [0-65519]" 00:08:49.930 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:49.930 21:02:05 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28652 -i 65520 00:08:49.930 [2024-04-18 21:02:05.847849] nvmf_rpc.c: 443:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28652: invalid cntlid range [65520-65519] 00:08:50.188 21:02:05 -- target/invalid.sh@75 -- # out='request: 00:08:50.188 { 00:08:50.188 "nqn": "nqn.2016-06.io.spdk:cnode28652", 00:08:50.188 "min_cntlid": 65520, 00:08:50.188 "method": "nvmf_create_subsystem", 00:08:50.188 "req_id": 1 00:08:50.188 } 00:08:50.188 Got JSON-RPC error response 00:08:50.188 response: 00:08:50.188 { 00:08:50.188 "code": -32602, 00:08:50.188 "message": "Invalid cntlid range [65520-65519]" 00:08:50.188 }' 00:08:50.188 21:02:05 -- target/invalid.sh@76 -- # [[ request: 00:08:50.188 { 00:08:50.188 "nqn": "nqn.2016-06.io.spdk:cnode28652", 00:08:50.188 "min_cntlid": 65520, 00:08:50.188 "method": "nvmf_create_subsystem", 
00:08:50.188 "req_id": 1 00:08:50.188 } 00:08:50.188 Got JSON-RPC error response 00:08:50.188 response: 00:08:50.188 { 00:08:50.188 "code": -32602, 00:08:50.188 "message": "Invalid cntlid range [65520-65519]" 00:08:50.188 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:50.188 21:02:05 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16012 -I 0 00:08:50.188 [2024-04-18 21:02:06.020419] nvmf_rpc.c: 443:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16012: invalid cntlid range [1-0] 00:08:50.188 21:02:06 -- target/invalid.sh@77 -- # out='request: 00:08:50.188 { 00:08:50.188 "nqn": "nqn.2016-06.io.spdk:cnode16012", 00:08:50.188 "max_cntlid": 0, 00:08:50.188 "method": "nvmf_create_subsystem", 00:08:50.188 "req_id": 1 00:08:50.188 } 00:08:50.188 Got JSON-RPC error response 00:08:50.188 response: 00:08:50.188 { 00:08:50.188 "code": -32602, 00:08:50.188 "message": "Invalid cntlid range [1-0]" 00:08:50.188 }' 00:08:50.188 21:02:06 -- target/invalid.sh@78 -- # [[ request: 00:08:50.188 { 00:08:50.188 "nqn": "nqn.2016-06.io.spdk:cnode16012", 00:08:50.188 "max_cntlid": 0, 00:08:50.188 "method": "nvmf_create_subsystem", 00:08:50.188 "req_id": 1 00:08:50.188 } 00:08:50.188 Got JSON-RPC error response 00:08:50.188 response: 00:08:50.188 { 00:08:50.188 "code": -32602, 00:08:50.188 "message": "Invalid cntlid range [1-0]" 00:08:50.188 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:50.188 21:02:06 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17192 -I 65520 00:08:50.446 [2024-04-18 21:02:06.201040] nvmf_rpc.c: 443:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17192: invalid cntlid range [1-65520] 00:08:50.446 21:02:06 -- target/invalid.sh@79 -- # out='request: 00:08:50.446 { 00:08:50.446 "nqn": "nqn.2016-06.io.spdk:cnode17192", 00:08:50.446 "max_cntlid": 65520, 00:08:50.446 "method": "nvmf_create_subsystem", 00:08:50.446 "req_id": 1 00:08:50.446 } 00:08:50.446 Got JSON-RPC error response 00:08:50.446 response: 00:08:50.446 { 00:08:50.446 "code": -32602, 00:08:50.446 "message": "Invalid cntlid range [1-65520]" 00:08:50.446 }' 00:08:50.446 21:02:06 -- target/invalid.sh@80 -- # [[ request: 00:08:50.446 { 00:08:50.446 "nqn": "nqn.2016-06.io.spdk:cnode17192", 00:08:50.446 "max_cntlid": 65520, 00:08:50.446 "method": "nvmf_create_subsystem", 00:08:50.446 "req_id": 1 00:08:50.446 } 00:08:50.446 Got JSON-RPC error response 00:08:50.446 response: 00:08:50.446 { 00:08:50.446 "code": -32602, 00:08:50.446 "message": "Invalid cntlid range [1-65520]" 00:08:50.446 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:50.446 21:02:06 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3426 -i 6 -I 5 00:08:50.705 [2024-04-18 21:02:06.381673] nvmf_rpc.c: 443:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3426: invalid cntlid range [6-5] 00:08:50.705 21:02:06 -- target/invalid.sh@83 -- # out='request: 00:08:50.705 { 00:08:50.705 "nqn": "nqn.2016-06.io.spdk:cnode3426", 00:08:50.705 "min_cntlid": 6, 00:08:50.705 "max_cntlid": 5, 00:08:50.705 "method": "nvmf_create_subsystem", 00:08:50.705 "req_id": 1 00:08:50.705 } 00:08:50.705 Got JSON-RPC error response 00:08:50.705 response: 00:08:50.705 { 00:08:50.705 "code": -32602, 00:08:50.705 "message": "Invalid cntlid range 
[6-5]" 00:08:50.705 }' 00:08:50.705 21:02:06 -- target/invalid.sh@84 -- # [[ request: 00:08:50.705 { 00:08:50.705 "nqn": "nqn.2016-06.io.spdk:cnode3426", 00:08:50.705 "min_cntlid": 6, 00:08:50.705 "max_cntlid": 5, 00:08:50.705 "method": "nvmf_create_subsystem", 00:08:50.705 "req_id": 1 00:08:50.705 } 00:08:50.705 Got JSON-RPC error response 00:08:50.705 response: 00:08:50.705 { 00:08:50.705 "code": -32602, 00:08:50.705 "message": "Invalid cntlid range [6-5]" 00:08:50.705 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:50.705 21:02:06 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:50.705 21:02:06 -- target/invalid.sh@87 -- # out='request: 00:08:50.705 { 00:08:50.705 "name": "foobar", 00:08:50.705 "method": "nvmf_delete_target", 00:08:50.705 "req_id": 1 00:08:50.705 } 00:08:50.705 Got JSON-RPC error response 00:08:50.705 response: 00:08:50.705 { 00:08:50.705 "code": -32602, 00:08:50.705 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:50.705 }' 00:08:50.705 21:02:06 -- target/invalid.sh@88 -- # [[ request: 00:08:50.705 { 00:08:50.705 "name": "foobar", 00:08:50.705 "method": "nvmf_delete_target", 00:08:50.705 "req_id": 1 00:08:50.705 } 00:08:50.705 Got JSON-RPC error response 00:08:50.705 response: 00:08:50.705 { 00:08:50.705 "code": -32602, 00:08:50.705 "message": "The specified target doesn't exist, cannot delete it." 00:08:50.705 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:50.705 21:02:06 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:50.705 21:02:06 -- target/invalid.sh@91 -- # nvmftestfini 00:08:50.705 21:02:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:50.705 21:02:06 -- nvmf/common.sh@117 -- # sync 00:08:50.705 21:02:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.705 21:02:06 -- nvmf/common.sh@120 -- # set +e 00:08:50.705 21:02:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.705 21:02:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.705 rmmod nvme_tcp 00:08:50.705 rmmod nvme_fabrics 00:08:50.705 rmmod nvme_keyring 00:08:50.705 21:02:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.705 21:02:06 -- nvmf/common.sh@124 -- # set -e 00:08:50.705 21:02:06 -- nvmf/common.sh@125 -- # return 0 00:08:50.705 21:02:06 -- nvmf/common.sh@478 -- # '[' -n 2927354 ']' 00:08:50.705 21:02:06 -- nvmf/common.sh@479 -- # killprocess 2927354 00:08:50.705 21:02:06 -- common/autotest_common.sh@936 -- # '[' -z 2927354 ']' 00:08:50.705 21:02:06 -- common/autotest_common.sh@940 -- # kill -0 2927354 00:08:50.705 21:02:06 -- common/autotest_common.sh@941 -- # uname 00:08:50.705 21:02:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:50.705 21:02:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2927354 00:08:50.705 21:02:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:50.705 21:02:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:50.705 21:02:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2927354' 00:08:50.705 killing process with pid 2927354 00:08:50.705 21:02:06 -- common/autotest_common.sh@955 -- # kill 2927354 00:08:50.705 21:02:06 -- common/autotest_common.sh@960 -- # wait 2927354 00:08:50.964 21:02:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:50.964 21:02:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 
00:08:50.964 21:02:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:50.964 21:02:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.964 21:02:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.964 21:02:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.964 21:02:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.964 21:02:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.499 21:02:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.499 00:08:53.499 real 0m12.495s 00:08:53.499 user 0m19.604s 00:08:53.499 sys 0m5.543s 00:08:53.499 21:02:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:53.499 21:02:08 -- common/autotest_common.sh@10 -- # set +x 00:08:53.499 ************************************ 00:08:53.499 END TEST nvmf_invalid 00:08:53.499 ************************************ 00:08:53.499 21:02:08 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:53.499 21:02:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:53.499 21:02:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.499 21:02:08 -- common/autotest_common.sh@10 -- # set +x 00:08:53.499 ************************************ 00:08:53.499 START TEST nvmf_abort 00:08:53.499 ************************************ 00:08:53.499 21:02:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:53.499 * Looking for test storage... 00:08:53.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.499 21:02:09 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.499 21:02:09 -- nvmf/common.sh@7 -- # uname -s 00:08:53.499 21:02:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.499 21:02:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.499 21:02:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.499 21:02:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.499 21:02:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.499 21:02:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.499 21:02:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.499 21:02:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.499 21:02:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.499 21:02:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.499 21:02:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:53.499 21:02:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:53.499 21:02:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.499 21:02:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.499 21:02:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.499 21:02:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.499 21:02:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.499 21:02:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.499 21:02:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.499 21:02:09 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.499 21:02:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.499 21:02:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.499 21:02:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.499 21:02:09 -- paths/export.sh@5 -- # export PATH 00:08:53.499 21:02:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.499 21:02:09 -- nvmf/common.sh@47 -- # : 0 00:08:53.499 21:02:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.499 21:02:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.499 21:02:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.499 21:02:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.499 21:02:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.499 21:02:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.499 21:02:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.499 21:02:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.499 21:02:09 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.499 21:02:09 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:53.499 21:02:09 -- target/abort.sh@14 -- # nvmftestinit 00:08:53.499 21:02:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:53.499 21:02:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.499 21:02:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:53.499 21:02:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:53.499 21:02:09 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:08:53.499 21:02:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.499 21:02:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.499 21:02:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.499 21:02:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:53.499 21:02:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:53.499 21:02:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.499 21:02:09 -- common/autotest_common.sh@10 -- # set +x 00:09:00.068 21:02:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:00.068 21:02:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.068 21:02:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.068 21:02:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.068 21:02:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.068 21:02:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.068 21:02:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.068 21:02:15 -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.068 21:02:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.068 21:02:15 -- nvmf/common.sh@296 -- # e810=() 00:09:00.068 21:02:15 -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.068 21:02:15 -- nvmf/common.sh@297 -- # x722=() 00:09:00.068 21:02:15 -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.068 21:02:15 -- nvmf/common.sh@298 -- # mlx=() 00:09:00.068 21:02:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.068 21:02:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.068 21:02:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.068 21:02:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.068 21:02:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.068 21:02:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.068 21:02:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:00.068 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:00.068 21:02:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.068 
21:02:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.068 21:02:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:00.068 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:00.068 21:02:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.068 21:02:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.068 21:02:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.068 21:02:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.068 21:02:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:00.068 21:02:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.068 21:02:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:00.068 Found net devices under 0000:86:00.0: cvl_0_0 00:09:00.068 21:02:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.068 21:02:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.068 21:02:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.068 21:02:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:00.068 21:02:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.068 21:02:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:00.068 Found net devices under 0000:86:00.1: cvl_0_1 00:09:00.068 21:02:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.069 21:02:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:00.069 21:02:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:00.069 21:02:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:00.069 21:02:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:00.069 21:02:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:00.069 21:02:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.069 21:02:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.069 21:02:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.069 21:02:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.069 21:02:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.069 21:02:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.069 21:02:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.069 21:02:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.069 21:02:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.069 21:02:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.069 21:02:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.069 21:02:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.069 21:02:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.069 21:02:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.069 21:02:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.069 21:02:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.069 
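At this point nvmftestinit has identified the two E810 ports (cvl_0_0 and cvl_0_1) and nvmf_tcp_init is building the test topology: the target-side port is moved into a private network namespace while the peer port stays in the root namespace as the initiator, 10.0.0.2/10.0.0.1 are assigned, and in the lines that follow TCP port 4420 is opened and connectivity is ping-checked in both directions. A standalone sketch of the same plumbing, using the interface names this log discovered:

    # Minimal sketch of the namespace topology nvmftestinit builds above.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays on the host
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # host -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1           # namespaced target -> host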
21:02:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.069 21:02:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.069 21:02:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.069 21:02:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:09:00.069 00:09:00.069 --- 10.0.0.2 ping statistics --- 00:09:00.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.069 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:00.069 21:02:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:09:00.069 00:09:00.069 --- 10.0.0.1 ping statistics --- 00:09:00.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.069 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:09:00.069 21:02:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.069 21:02:15 -- nvmf/common.sh@411 -- # return 0 00:09:00.069 21:02:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:00.069 21:02:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.069 21:02:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:00.069 21:02:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:00.069 21:02:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.069 21:02:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:00.069 21:02:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:00.069 21:02:15 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:00.069 21:02:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:00.069 21:02:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:00.069 21:02:15 -- common/autotest_common.sh@10 -- # set +x 00:09:00.069 21:02:15 -- nvmf/common.sh@470 -- # nvmfpid=2932037 00:09:00.069 21:02:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:00.069 21:02:15 -- nvmf/common.sh@471 -- # waitforlisten 2932037 00:09:00.069 21:02:15 -- common/autotest_common.sh@817 -- # '[' -z 2932037 ']' 00:09:00.069 21:02:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.069 21:02:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:00.069 21:02:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.069 21:02:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:00.069 21:02:15 -- common/autotest_common.sh@10 -- # set +x 00:09:00.069 [2024-04-18 21:02:15.577932] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
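With the fabric reachable, nvmfappstart launches the target inside that namespace (shm id 0, all tracepoint groups enabled, core mask 0xE) and waitforlisten blocks until the RPC socket answers. A rough stand-in for those two steps (the real waitforlisten in autotest_common.sh also handles timeouts and PID checks):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket until the target is up (simplified waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done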
00:09:00.069 [2024-04-18 21:02:15.577975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.069 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.069 [2024-04-18 21:02:15.639743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.069 [2024-04-18 21:02:15.720601] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.069 [2024-04-18 21:02:15.720633] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.069 [2024-04-18 21:02:15.720640] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.069 [2024-04-18 21:02:15.720645] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.069 [2024-04-18 21:02:15.720650] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.069 [2024-04-18 21:02:15.720758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.069 [2024-04-18 21:02:15.720855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.069 [2024-04-18 21:02:15.720857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.636 21:02:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:00.636 21:02:16 -- common/autotest_common.sh@850 -- # return 0 00:09:00.636 21:02:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:00.636 21:02:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:00.636 21:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.636 21:02:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.636 21:02:16 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:00.636 21:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.636 21:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.637 [2024-04-18 21:02:16.428412] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.637 21:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.637 21:02:16 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:00.637 21:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.637 21:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.637 Malloc0 00:09:00.637 21:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.637 21:02:16 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:00.637 21:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.637 21:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.637 Delay0 00:09:00.637 21:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.637 21:02:16 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.637 21:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.637 21:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.637 21:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.637 21:02:16 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:00.637 21:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:09:00.637 21:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.637 21:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.637 21:02:16 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.637 21:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.637 21:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.637 [2024-04-18 21:02:16.500456] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.637 21:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.637 21:02:16 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.637 21:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.637 21:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.637 21:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.637 21:02:16 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:00.637 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.896 [2024-04-18 21:02:16.608702] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:03.456 Initializing NVMe Controllers 00:09:03.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:03.456 controller IO queue size 128 less than required 00:09:03.456 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:03.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:03.456 Initialization complete. Launching workers. 
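The abort test body is a handful of rpc_cmd calls followed by the abort example on the initiator side: create the TCP transport, back a delay bdev with a 64 MiB malloc bdev so I/O stays outstanding long enough to be aborted, expose it through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, then drive it at queue depth 128. Condensed from the commands traced above (rpc.py and abort paths shortened to their repo-relative form):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    # The delay bdev adds artificial latency so abort requests find queued I/O.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: submit 128-deep I/O and abort it (1-second run per -t 1).
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128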
00:09:03.456 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41688 00:09:03.456 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41749, failed to submit 62 00:09:03.456 success 41692, unsuccess 57, failed 0 00:09:03.456 21:02:18 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.456 21:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.457 21:02:18 -- common/autotest_common.sh@10 -- # set +x 00:09:03.457 21:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.457 21:02:18 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:03.457 21:02:18 -- target/abort.sh@38 -- # nvmftestfini 00:09:03.457 21:02:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:03.457 21:02:18 -- nvmf/common.sh@117 -- # sync 00:09:03.457 21:02:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.457 21:02:18 -- nvmf/common.sh@120 -- # set +e 00:09:03.457 21:02:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.457 21:02:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.457 rmmod nvme_tcp 00:09:03.457 rmmod nvme_fabrics 00:09:03.457 rmmod nvme_keyring 00:09:03.457 21:02:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.457 21:02:18 -- nvmf/common.sh@124 -- # set -e 00:09:03.457 21:02:18 -- nvmf/common.sh@125 -- # return 0 00:09:03.457 21:02:18 -- nvmf/common.sh@478 -- # '[' -n 2932037 ']' 00:09:03.457 21:02:18 -- nvmf/common.sh@479 -- # killprocess 2932037 00:09:03.457 21:02:18 -- common/autotest_common.sh@936 -- # '[' -z 2932037 ']' 00:09:03.457 21:02:18 -- common/autotest_common.sh@940 -- # kill -0 2932037 00:09:03.457 21:02:18 -- common/autotest_common.sh@941 -- # uname 00:09:03.457 21:02:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:03.457 21:02:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2932037 00:09:03.457 21:02:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:03.457 21:02:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:03.457 21:02:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2932037' 00:09:03.457 killing process with pid 2932037 00:09:03.457 21:02:18 -- common/autotest_common.sh@955 -- # kill 2932037 00:09:03.457 21:02:18 -- common/autotest_common.sh@960 -- # wait 2932037 00:09:03.457 21:02:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:03.457 21:02:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:03.457 21:02:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:03.457 21:02:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.457 21:02:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.457 21:02:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.457 21:02:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.457 21:02:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.365 21:02:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.365 00:09:05.365 real 0m12.117s 00:09:05.365 user 0m13.603s 00:09:05.365 sys 0m5.840s 00:09:05.365 21:02:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.365 21:02:21 -- common/autotest_common.sh@10 -- # set +x 00:09:05.365 ************************************ 00:09:05.365 END TEST nvmf_abort 00:09:05.365 ************************************ 00:09:05.365 21:02:21 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:05.365 21:02:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:05.365 21:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.365 21:02:21 -- common/autotest_common.sh@10 -- # set +x 00:09:05.625 ************************************ 00:09:05.625 START TEST nvmf_ns_hotplug_stress 00:09:05.625 ************************************ 00:09:05.625 21:02:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:05.625 * Looking for test storage... 00:09:05.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.625 21:02:21 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.625 21:02:21 -- nvmf/common.sh@7 -- # uname -s 00:09:05.625 21:02:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.625 21:02:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.625 21:02:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.625 21:02:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.625 21:02:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.625 21:02:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.625 21:02:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.625 21:02:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.625 21:02:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.625 21:02:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.625 21:02:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:05.625 21:02:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:05.625 21:02:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.625 21:02:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.625 21:02:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.625 21:02:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.625 21:02:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.625 21:02:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.625 21:02:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.625 21:02:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.625 21:02:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.625 21:02:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.625 21:02:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.625 21:02:21 -- paths/export.sh@5 -- # export PATH 00:09:05.625 21:02:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.625 21:02:21 -- nvmf/common.sh@47 -- # : 0 00:09:05.625 21:02:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.625 21:02:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.625 21:02:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.625 21:02:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.625 21:02:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.625 21:02:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.625 21:02:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.625 21:02:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.625 21:02:21 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.625 21:02:21 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:09:05.625 21:02:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:05.625 21:02:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.625 21:02:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:05.625 21:02:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:05.625 21:02:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:05.625 21:02:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.625 21:02:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.625 21:02:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.625 21:02:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:05.625 21:02:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:05.625 21:02:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:05.625 21:02:21 -- common/autotest_common.sh@10 -- # set +x 00:09:10.894 21:02:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:09:10.894 21:02:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.894 21:02:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.895 21:02:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.895 21:02:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.895 21:02:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.895 21:02:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.895 21:02:26 -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.895 21:02:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.895 21:02:26 -- nvmf/common.sh@296 -- # e810=() 00:09:10.895 21:02:26 -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.895 21:02:26 -- nvmf/common.sh@297 -- # x722=() 00:09:10.895 21:02:26 -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.895 21:02:26 -- nvmf/common.sh@298 -- # mlx=() 00:09:10.895 21:02:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.895 21:02:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.895 21:02:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.895 21:02:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.895 21:02:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.895 21:02:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.895 21:02:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:10.895 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:10.895 21:02:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.895 21:02:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:10.895 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:10.895 21:02:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
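This is the same NIC discovery pass seen before the abort test: ports are classified by PCI vendor/device ID (0x8086:0x159b is the Intel E810 "ice" family found here) and each function's kernel netdev name is read out of sysfs, which is where the "Found net devices under 0000:86:00.x: cvl_0_y" lines come from. An equivalent one-off check, using lspci as a hypothetical stand-in for the script's internal PCI cache:

    # Stand-in for gather_supported_nvmf_pci_devs's sysfs walk.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done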
00:09:10.895 21:02:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.895 21:02:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.895 21:02:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:10.895 21:02:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.895 21:02:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:10.895 Found net devices under 0000:86:00.0: cvl_0_0 00:09:10.895 21:02:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.895 21:02:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.895 21:02:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.895 21:02:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:10.895 21:02:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.895 21:02:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:10.895 Found net devices under 0000:86:00.1: cvl_0_1 00:09:10.895 21:02:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.895 21:02:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:10.895 21:02:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:10.895 21:02:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:10.895 21:02:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:10.895 21:02:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.895 21:02:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.895 21:02:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.895 21:02:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.895 21:02:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.895 21:02:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.895 21:02:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.895 21:02:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.895 21:02:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.895 21:02:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.895 21:02:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.895 21:02:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.895 21:02:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.895 21:02:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.895 21:02:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.895 21:02:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.895 21:02:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.895 21:02:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.895 21:02:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.896 21:02:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:09:10.896 00:09:10.896 --- 10.0.0.2 ping statistics --- 00:09:10.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.896 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:10.896 21:02:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:09:10.896 00:09:10.896 --- 10.0.0.1 ping statistics --- 00:09:10.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.896 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:10.896 21:02:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.896 21:02:26 -- nvmf/common.sh@411 -- # return 0 00:09:10.896 21:02:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:10.896 21:02:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.896 21:02:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:10.896 21:02:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:10.896 21:02:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.896 21:02:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:10.896 21:02:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:10.896 21:02:26 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:09:10.896 21:02:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:10.896 21:02:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:10.896 21:02:26 -- common/autotest_common.sh@10 -- # set +x 00:09:10.896 21:02:26 -- nvmf/common.sh@470 -- # nvmfpid=2936338 00:09:10.896 21:02:26 -- nvmf/common.sh@471 -- # waitforlisten 2936338 00:09:10.896 21:02:26 -- common/autotest_common.sh@817 -- # '[' -z 2936338 ']' 00:09:10.896 21:02:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.896 21:02:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:10.896 21:02:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.896 21:02:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:10.896 21:02:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:10.896 21:02:26 -- common/autotest_common.sh@10 -- # set +x 00:09:10.896 [2024-04-18 21:02:26.794503] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:09:10.896 [2024-04-18 21:02:26.794555] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.896 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.155 [2024-04-18 21:02:26.857545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.155 [2024-04-18 21:02:26.934911] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.155 [2024-04-18 21:02:26.934945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
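The block above turns the two ice ports into a back-to-back NVMe/TCP test bed: cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2, and the target is started inside that namespace. Below is a condensed sketch of the same sequence using the interface names, addresses and flags from this log; the polling loop at the end is only an illustrative stand-in for the harness's waitforlisten helper.

#!/usr/bin/env bash
set -e
TARGET_NS=cvl_0_0_ns_spdk

ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                       # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                           # root ns -> target ns
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                # target ns -> root ns

# Start the target inside the namespace and wait for its RPC socket to appear
# (simplified stand-in for waitforlisten; /var/tmp/spdk.sock is the default socket).
ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done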
00:09:11.155 [2024-04-18 21:02:26.934951] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.155 [2024-04-18 21:02:26.934957] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.155 [2024-04-18 21:02:26.934962] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.155 [2024-04-18 21:02:26.935072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.155 [2024-04-18 21:02:26.935091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.155 [2024-04-18 21:02:26.935092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.720 21:02:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:11.720 21:02:27 -- common/autotest_common.sh@850 -- # return 0 00:09:11.720 21:02:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:11.720 21:02:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:11.720 21:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:11.720 21:02:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.720 21:02:27 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:09:11.720 21:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.978 [2024-04-18 21:02:27.792407] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.978 21:02:27 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:12.237 21:02:27 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.237 [2024-04-18 21:02:28.149681] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.495 21:02:28 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.495 21:02:28 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:12.753 Malloc0 00:09:12.753 21:02:28 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:13.012 Delay0 00:09:13.012 21:02:28 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.012 21:02:28 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:13.271 NULL1 00:09:13.271 21:02:29 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:13.529 21:02:29 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2936829 00:09:13.529 21:02:29 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread 
-o 512 -Q 1000 00:09:13.529 21:02:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:13.529 21:02:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.529 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.529 21:02:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.787 21:02:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:09:13.787 21:02:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:14.046 true 00:09:14.046 21:02:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:14.046 21:02:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.304 21:02:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.304 21:02:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:09:14.304 21:02:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:14.562 true 00:09:14.562 21:02:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:14.562 21:02:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.939 Read completed with error (sct=0, sc=11) 00:09:15.939 21:02:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.939 21:02:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:09:15.939 21:02:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:16.198 true 00:09:16.198 21:02:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:16.198 21:02:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.136 21:02:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.136 21:02:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:09:17.136 21:02:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:17.136 true 00:09:17.396 21:02:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:17.396 21:02:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:09:17.396 21:02:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.657 21:02:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:09:17.657 21:02:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:17.916 true 00:09:17.917 21:02:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:17.917 21:02:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.854 21:02:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.114 21:02:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:09:19.114 21:02:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:19.373 true 00:09:19.373 21:02:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:19.373 21:02:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.311 21:02:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.311 21:02:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:09:20.311 21:02:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:20.570 true 00:09:20.570 21:02:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:20.570 21:02:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.830 21:02:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.830 21:02:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:09:20.830 21:02:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:21.089 true 00:09:21.089 21:02:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:21.089 21:02:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.467 21:02:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.467 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:09:22.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.467 21:02:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:09:22.467 21:02:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:22.467 true 00:09:22.467 21:02:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:22.467 21:02:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.405 21:02:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.665 21:02:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:09:23.665 21:02:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:23.665 true 00:09:23.665 21:02:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:23.665 21:02:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.924 21:02:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.182 21:02:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:09:24.182 21:02:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:24.182 true 00:09:24.182 21:02:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:24.182 21:02:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.638 21:02:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.638 21:02:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:09:25.638 21:02:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:25.897 true 00:09:25.897 21:02:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:25.897 21:02:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.835 21:02:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.835 21:02:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:09:26.835 21:02:42 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:27.094 true 00:09:27.094 21:02:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:27.094 21:02:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.354 21:02:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.354 21:02:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:09:27.354 21:02:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:27.612 true 00:09:27.612 21:02:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:27.612 21:02:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.991 21:02:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.991 21:02:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:09:28.991 21:02:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:28.991 true 00:09:29.250 21:02:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:29.250 21:02:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.250 21:02:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.509 21:02:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:09:29.509 21:02:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:29.768 true 00:09:29.768 21:02:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:29.768 21:02:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.768 21:02:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.028 21:02:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:09:30.028 21:02:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:30.287 true 00:09:30.287 21:02:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:30.287 21:02:46 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.224 21:02:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.224 21:02:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:09:31.224 21:02:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:31.484 true 00:09:31.484 21:02:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:31.484 21:02:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.743 21:02:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.743 21:02:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:09:31.743 21:02:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:32.002 true 00:09:32.002 21:02:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:32.002 21:02:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.261 21:02:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.261 21:02:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:09:32.261 21:02:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:32.520 true 00:09:32.520 21:02:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:32.520 21:02:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.779 21:02:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.779 21:02:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:09:32.779 21:02:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:33.039 true 00:09:33.039 21:02:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:33.039 21:02:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.298 21:02:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.298 21:02:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:09:33.298 21:02:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:33.557 true 00:09:33.557 21:02:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:33.557 21:02:49 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.816 21:02:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.816 21:02:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:09:33.816 21:02:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:34.076 true 00:09:34.076 21:02:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:34.076 21:02:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.455 21:02:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.455 21:02:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:09:35.455 21:02:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:35.713 true 00:09:35.713 21:02:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:35.713 21:02:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.650 21:02:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.650 21:02:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:09:36.650 21:02:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:36.909 true 00:09:36.909 21:02:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:36.909 21:02:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.909 21:02:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.168 21:02:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:09:37.168 21:02:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:37.426 true 00:09:37.426 21:02:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:37.426 21:02:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.804 21:02:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
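The repeating null_size=10xx / nvmf_subsystem_remove_ns / nvmf_subsystem_add_ns / bdev_null_resize pattern above is the actual stress loop: while spdk_nvme_perf keeps 128 random reads in flight against cnode1, the script detaches namespace 1, re-attaches Delay0 and grows the NULL1 bdev, and the suppressed read errors are consistent with reads arriving while a namespace is detached. A compressed sketch of that loop, assembled only from the rpc.py and spdk_nvme_perf invocations already shown in this trace; loop pacing and the relative paths are illustrative.

#!/usr/bin/env bash
# Sketch of the hotplug stress pattern traced above; assumes it is run from an SPDK checkout.
rpc_py=./scripts/rpc.py

# Reader load: 30 s of 512-byte random reads at queue depth 128, continuing on errors (-Q).
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                    # loop until perf exits
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"              # grow the null bdev each round
done
wait "$PERF_PID"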
00:09:38.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.804 21:02:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:09:38.804 21:02:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:38.804 true 00:09:38.804 21:02:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:38.804 21:02:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.810 21:02:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.810 21:02:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:09:39.810 21:02:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:40.069 true 00:09:40.069 21:02:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:40.069 21:02:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.327 21:02:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.327 21:02:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:09:40.327 21:02:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:40.585 true 00:09:40.585 21:02:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:40.585 21:02:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.844 21:02:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.103 21:02:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:09:41.103 21:02:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:41.103 true 00:09:41.103 21:02:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:41.103 21:02:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.040 21:02:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.299 21:02:57 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:09:42.299 21:02:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:42.299 true 00:09:42.299 21:02:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:42.299 21:02:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.559 21:02:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.824 21:02:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:09:42.824 21:02:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:42.824 true 00:09:42.824 21:02:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:42.824 21:02:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.127 21:02:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.128 [2024-04-18 21:02:59.052577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.052660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.052718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.052766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.052821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.052867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.052917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.052961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.053018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.053066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.053109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.053149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.128 [2024-04-18 21:02:59.053199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:43.395 [2024-04-18 21:02:59.062864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.062909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.062959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.063984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064480] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.064960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 
[2024-04-18 21:02:59.065710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.395 [2024-04-18 21:02:59.065853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.065893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.065938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.065981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.066937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.067984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068176] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.068962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.069008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.069053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.069102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.069152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.069199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 
[2024-04-18 21:02:59.070194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.070981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.071025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.071062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.396 [2024-04-18 21:02:59.071103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.071974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072518] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.072963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 
[2024-04-18 21:02:59.073929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.073971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.074983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.397 [2024-04-18 21:02:59.075711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.075764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.075812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.075865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.075912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.075959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.076011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.076056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.076911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.076956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077092] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.077984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 
[2024-04-18 21:02:59.078330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.078971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.079850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.080871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.081416] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.081469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.081527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.081580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.081634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.081679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.081729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.398 [2024-04-18 21:02:59.081774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.081824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.081874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.081923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.081969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 
[2024-04-18 21:02:59.082656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.082952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.399 [2024-04-18 21:02:59.083796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
21:02:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033
21:02:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:09:43.403 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[the ctrlr_bdev.c:298 read errors continue through 21:02:59.108; a further burst of the same error starts at 21:02:59.469 (00:09:43.673) and runs past 21:02:59.473]
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.473672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.473729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.473785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.473848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.473904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.473967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.474030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.474084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.475976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 
[2024-04-18 21:02:59.476556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.476958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.477956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.478987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.674 [2024-04-18 21:02:59.479842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.479897] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.479952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.480977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 
[2024-04-18 21:02:59.481515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.481981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.482982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.483041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.483103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.483159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.483229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.483301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.483367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.483431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.483495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.484595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.484660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.484714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.484769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.484836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.484890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.484947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 true 00:09:43.675 [2024-04-18 21:02:59.485165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485640] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.485971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.675 [2024-04-18 21:02:59.486881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.486938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.486989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 
[2024-04-18 21:02:59.487285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.487989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.488985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489946] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.489996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.490961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 
[2024-04-18 21:02:59.491275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.491531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.492399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.492444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.492486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.492537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.492588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.492641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.676 [2024-04-18 21:02:59.492693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.492738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.492782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.492829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.492879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.492924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.492971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.493998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494518] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.494979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 [2024-04-18 21:02:59.495868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.677 
[2024-04-18 21:02:59.495909 .. 21:02:59.508331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry logged once per rejected read; only the microsecond timestamp changes)
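What the target is complaining about is a simple length-consistency check: each Read command asks for NLB logical blocks of the namespace's block size, but the SGL it carries describes a smaller payload buffer, so the command is failed instead of overrunning that buffer. A minimal shell sketch of the arithmetic behind the message, using the values from the log (the variable names are illustrative, not SPDK identifiers):

    nlb=1            # number of logical blocks requested by the Read (from the log)
    block_size=512   # namespace block size in bytes (from the log)
    sgl_length=1     # payload bytes described by the command's SGL (from the log)
    if (( nlb * block_size > sgl_length )); then
        echo "reject read: $(( nlb * block_size )) bytes requested > ${sgl_length}-byte SGL buffer"
    fi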
[2024-04-18 21:02:59.508382 .. 21:02:59.509103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry repeated; only the microsecond timestamp changes)
00:09:43.680 21:02:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829
00:09:43.680 [2024-04-18 21:02:59.509158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:43.680 21:02:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:43.680 [2024-04-18 21:02:59.510015 .. 21:02:59.510353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry repeated; only the microsecond timestamp changes)
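The two script lines interleaved with the errors are the hotplug-stress loop itself: ns_hotplug_stress.sh line 35 confirms the background I/O generator (PID 2936829 in this run) is still alive, and line 36 hot-removes namespace 1 from subsystem nqn.2016-06.io.spdk:cnode1 over the SPDK RPC socket while the reads logged around it are still in flight. A minimal sketch of that step, using only the two commands visible in the log (the variable names are illustrative; the real script does more than this):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    io_pid=2936829   # PID of the background I/O job, as printed on line @35 above

    kill -0 "$io_pid"                                              # @35: abort the test if the I/O job has died
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @36: hot-remove namespace 1 while I/O is in flight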
[2024-04-18 21:02:59.510399 .. 21:02:59.522475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry logged once per rejected read; only the microsecond timestamp changes)
[2024-04-18 21:02:59.522536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.522994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.523037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.523084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.523123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.523171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:43.683 [2024-04-18 21:02:59.524172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 
21:02:59.524697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.524986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.683 [2024-04-18 21:02:59.525843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.525890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:43.684 [2024-04-18 21:02:59.525941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.525988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.526970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.527984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528906] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.528989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.529987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 
[2024-04-18 21:02:59.530189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.684 [2024-04-18 21:02:59.530760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.530812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.530857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.530912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.530959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.531972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532803] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.532959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.533978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.534023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 
[2024-04-18 21:02:59.534070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.534970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.535953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.685 [2024-04-18 21:02:59.536528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.536997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537385] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.537995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 
[2024-04-18 21:02:59.538817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.538971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.539975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.540990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541783] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.686 [2024-04-18 21:02:59.541952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.542966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 
[2024-04-18 21:02:59.543222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.543992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:43.687 [2024-04-18 21:02:59.544504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:09:43.687 - 00:09:43.692 ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (this same rejection is logged several hundred more times, stamped 2024-04-18 21:02:59.544549 through 21:02:59.569186, once per rejected read command)
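The message itself spells out why each read fails: the requested transfer is NLB * block size = 1 * 512 = 512 bytes, while the SGL attached to the command describes only 1 byte of buffer, so 512 > 1 and the command is completed with an error. Whatever is driving I/O here keeps such commands queued while namespaces are hot-plugged, which is why the identical line repeats; the rejections do not fail the run (END TEST nvmf_ns_hotplug_stress appears further below).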
00:09:43.692 Initializing NVMe Controllers
00:09:43.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:43.692 Controller IO queue size 128, less than required.
00:09:43.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:43.692 Controller IO queue size 128, less than required.
00:09:43.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:43.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:43.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:43.692 Initialization complete. Launching workers.
00:09:43.692 ======================================================== 00:09:43.692 Latency(us) 00:09:43.692 Device Information : IOPS MiB/s Average min max 00:09:43.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1726.94 0.84 44357.31 1245.08 1065066.22 00:09:43.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14614.48 7.14 8736.40 2268.33 307420.77 00:09:43.692 ======================================================== 00:09:43.692 Total : 16341.41 7.98 12500.76 1245.08 1065066.22 00:09:43.692 00:09:43.951 21:02:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.951 21:02:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:09:43.951 21:02:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:44.210 true 00:09:44.210 21:03:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2936829 00:09:44.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2936829) - No such process 00:09:44.210 21:03:00 -- target/ns_hotplug_stress.sh@44 -- # wait 2936829 00:09:44.210 21:03:00 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:09:44.210 21:03:00 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:09:44.210 21:03:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:44.210 21:03:00 -- nvmf/common.sh@117 -- # sync 00:09:44.210 21:03:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.210 21:03:00 -- nvmf/common.sh@120 -- # set +e 00:09:44.210 21:03:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.210 21:03:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.210 rmmod nvme_tcp 00:09:44.210 rmmod nvme_fabrics 00:09:44.210 rmmod nvme_keyring 00:09:44.210 21:03:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.210 21:03:00 -- nvmf/common.sh@124 -- # set -e 00:09:44.210 21:03:00 -- nvmf/common.sh@125 -- # return 0 00:09:44.210 21:03:00 -- nvmf/common.sh@478 -- # '[' -n 2936338 ']' 00:09:44.210 21:03:00 -- nvmf/common.sh@479 -- # killprocess 2936338 00:09:44.210 21:03:00 -- common/autotest_common.sh@936 -- # '[' -z 2936338 ']' 00:09:44.210 21:03:00 -- common/autotest_common.sh@940 -- # kill -0 2936338 00:09:44.210 21:03:00 -- common/autotest_common.sh@941 -- # uname 00:09:44.210 21:03:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:44.210 21:03:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2936338 00:09:44.470 21:03:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:44.470 21:03:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:44.470 21:03:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2936338' 00:09:44.470 killing process with pid 2936338 00:09:44.470 21:03:00 -- common/autotest_common.sh@955 -- # kill 2936338 00:09:44.470 21:03:00 -- common/autotest_common.sh@960 -- # wait 2936338 00:09:44.470 21:03:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:44.470 21:03:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:44.470 21:03:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:44.470 21:03:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.470 21:03:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.470 21:03:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
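The Total row above is simply the two namespaces combined: 1726.94 + 14614.48 ≈ 16341.41 IOPS, and the 12500.76 us average is the IOPS-weighted mean latency, (1726.94 * 44357.31 + 14614.48 * 8736.40) / 16341.41 ≈ 12500 us. The lines that follow are the hot-plug half of ns_hotplug_stress: with the workload still connected, the script hot-adds Delay0 as an extra namespace and resizes the NULL1 null bdev (created earlier in the test) over the RPC socket. A minimal sketch of the two RPCs behind the @37/@41 calls above, with the absolute script path shortened:

    rpc=./scripts/rpc.py                       # /var/jenkins/.../spdk/scripts/rpc.py in this run
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0   # hot-plug another namespace under live I/O
    $rpc bdev_null_resize NULL1 1034           # grow the null bdev; 1034 is the null_size counter at this point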
00:09:44.470 21:03:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.470 21:03:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.008 21:03:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.008 00:09:47.008 real 0m41.063s 00:09:47.008 user 2m27.664s 00:09:47.008 sys 0m10.444s 00:09:47.008 21:03:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:47.008 21:03:02 -- common/autotest_common.sh@10 -- # set +x 00:09:47.008 ************************************ 00:09:47.008 END TEST nvmf_ns_hotplug_stress 00:09:47.008 ************************************ 00:09:47.008 21:03:02 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:47.008 21:03:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:47.008 21:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:47.008 21:03:02 -- common/autotest_common.sh@10 -- # set +x 00:09:47.008 ************************************ 00:09:47.008 START TEST nvmf_connect_stress 00:09:47.008 ************************************ 00:09:47.008 21:03:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:47.008 * Looking for test storage... 00:09:47.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.008 21:03:02 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.008 21:03:02 -- nvmf/common.sh@7 -- # uname -s 00:09:47.008 21:03:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.008 21:03:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.008 21:03:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.008 21:03:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.008 21:03:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.008 21:03:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.008 21:03:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.008 21:03:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.008 21:03:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.008 21:03:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.008 21:03:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:47.008 21:03:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:47.008 21:03:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.008 21:03:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.008 21:03:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.008 21:03:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.008 21:03:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.008 21:03:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.008 21:03:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.008 21:03:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.008 21:03:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.008 21:03:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.008 21:03:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.008 21:03:02 -- paths/export.sh@5 -- # export PATH 00:09:47.008 21:03:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.008 21:03:02 -- nvmf/common.sh@47 -- # : 0 00:09:47.008 21:03:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.008 21:03:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.008 21:03:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.008 21:03:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.008 21:03:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.008 21:03:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.008 21:03:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.008 21:03:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.008 21:03:02 -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:47.008 21:03:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:47.008 21:03:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.008 21:03:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:47.008 21:03:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:47.008 21:03:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:47.008 21:03:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.008 21:03:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.008 21:03:02 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.008 21:03:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:47.008 21:03:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:47.008 21:03:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.008 21:03:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.579 21:03:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:53.579 21:03:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.579 21:03:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.579 21:03:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.579 21:03:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.579 21:03:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.579 21:03:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.579 21:03:08 -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.579 21:03:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.579 21:03:08 -- nvmf/common.sh@296 -- # e810=() 00:09:53.579 21:03:08 -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.579 21:03:08 -- nvmf/common.sh@297 -- # x722=() 00:09:53.579 21:03:08 -- nvmf/common.sh@297 -- # local -ga x722 00:09:53.579 21:03:08 -- nvmf/common.sh@298 -- # mlx=() 00:09:53.579 21:03:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.579 21:03:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.579 21:03:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.579 21:03:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:53.579 21:03:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.579 21:03:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.579 21:03:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:53.579 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:53.579 21:03:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.579 21:03:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:53.579 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:53.579 
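This block is the NIC auto-detection in nvmf/common.sh: it walks a cached PCI map, keeps the Intel E810 IDs from the e810 list (0x1592/0x159b), and resolves each matching function to its kernel netdev through /sys/bus/pci/devices/<bdf>/net. A rough stand-alone equivalent with stock tools, for illustration only (this is not the script's own code):

    # list E810 functions (8086:159b) and the netdev bound to each
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        netdev=$(ls /sys/bus/pci/devices/"$pci"/net 2>/dev/null)
        echo "Found net devices under $pci: $netdev"
    done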
21:03:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.579 21:03:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.579 21:03:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.579 21:03:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:53.579 21:03:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.579 21:03:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:53.579 Found net devices under 0000:86:00.0: cvl_0_0 00:09:53.579 21:03:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.579 21:03:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.579 21:03:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.579 21:03:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:53.579 21:03:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.579 21:03:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:53.579 Found net devices under 0000:86:00.1: cvl_0_1 00:09:53.579 21:03:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.579 21:03:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:53.579 21:03:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:53.579 21:03:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:53.579 21:03:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.579 21:03:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.579 21:03:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.579 21:03:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.579 21:03:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.579 21:03:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.579 21:03:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.579 21:03:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.579 21:03:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.579 21:03:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.579 21:03:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.579 21:03:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.579 21:03:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.579 21:03:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.579 21:03:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.579 21:03:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:53.579 21:03:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.579 21:03:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.579 21:03:08 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.579 21:03:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:53.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:09:53.579 00:09:53.579 --- 10.0.0.2 ping statistics --- 00:09:53.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.579 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:09:53.579 21:03:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:09:53.579 00:09:53.579 --- 10.0.0.1 ping statistics --- 00:09:53.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.579 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:09:53.579 21:03:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.579 21:03:08 -- nvmf/common.sh@411 -- # return 0 00:09:53.579 21:03:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:53.579 21:03:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.579 21:03:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:53.579 21:03:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.579 21:03:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:53.579 21:03:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:53.579 21:03:08 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:53.579 21:03:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:53.579 21:03:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:53.579 21:03:08 -- common/autotest_common.sh@10 -- # set +x 00:09:53.579 21:03:08 -- nvmf/common.sh@470 -- # nvmfpid=2946104 00:09:53.579 21:03:08 -- nvmf/common.sh@471 -- # waitforlisten 2946104 00:09:53.580 21:03:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:53.580 21:03:08 -- common/autotest_common.sh@817 -- # '[' -z 2946104 ']' 00:09:53.580 21:03:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.580 21:03:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:53.580 21:03:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.580 21:03:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:53.580 21:03:08 -- common/autotest_common.sh@10 -- # set +x 00:09:53.580 [2024-04-18 21:03:08.870583] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:09:53.580 [2024-04-18 21:03:08.870629] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.580 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.580 [2024-04-18 21:03:08.934668] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.580 [2024-04-18 21:03:09.012258] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
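nvmf_tcp_init, traced above, splits the two detected ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule accepts TCP/4420 traffic arriving on cvl_0_1, and a ping in each direction proves the path before nvmf_tgt is started inside the namespace. Condensed from the commands in this trace (nvmf_tgt binary path shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE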
00:09:53.580 [2024-04-18 21:03:09.012292] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.580 [2024-04-18 21:03:09.012300] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.580 [2024-04-18 21:03:09.012306] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.580 [2024-04-18 21:03:09.012311] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.580 [2024-04-18 21:03:09.012410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.580 [2024-04-18 21:03:09.012422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.580 [2024-04-18 21:03:09.012423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.838 21:03:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:53.838 21:03:09 -- common/autotest_common.sh@850 -- # return 0 00:09:53.838 21:03:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:53.838 21:03:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:53.838 21:03:09 -- common/autotest_common.sh@10 -- # set +x 00:09:53.838 21:03:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.838 21:03:09 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.838 21:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.838 21:03:09 -- common/autotest_common.sh@10 -- # set +x 00:09:53.839 [2024-04-18 21:03:09.725685] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.839 21:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.839 21:03:09 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:53.839 21:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.839 21:03:09 -- common/autotest_common.sh@10 -- # set +x 00:09:53.839 21:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.839 21:03:09 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.839 21:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.839 21:03:09 -- common/autotest_common.sh@10 -- # set +x 00:09:53.839 [2024-04-18 21:03:09.761615] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.839 21:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.839 21:03:09 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:53.839 21:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.839 21:03:09 -- common/autotest_common.sh@10 -- # set +x 00:09:54.098 NULL1 00:09:54.098 21:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.098 21:03:09 -- target/connect_stress.sh@21 -- # PERF_PID=2946537 00:09:54.098 21:03:09 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:54.098 21:03:09 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:54.098 21:03:09 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # seq 1 20 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:54.098 21:03:09 -- target/connect_stress.sh@28 -- # cat 00:09:54.098 21:03:09 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:54.098 21:03:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:54.098 21:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.098 21:03:09 -- common/autotest_common.sh@10 -- # set +x 00:09:54.357 21:03:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.357 21:03:10 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:54.357 21:03:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:54.357 21:03:10 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.357 21:03:10 -- common/autotest_common.sh@10 -- # set +x 00:09:54.615 21:03:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.615 21:03:10 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:54.615 21:03:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:54.615 21:03:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.615 21:03:10 -- common/autotest_common.sh@10 -- # set +x 00:09:55.182 21:03:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.182 21:03:10 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:55.182 21:03:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:55.182 21:03:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.182 21:03:10 -- common/autotest_common.sh@10 -- # set +x 00:09:55.440 21:03:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.440 21:03:11 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:55.440 21:03:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:55.440 21:03:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.440 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:09:55.698 21:03:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.698 21:03:11 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:55.698 21:03:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:55.698 21:03:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.698 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:09:55.956 21:03:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.957 21:03:11 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:55.957 21:03:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:55.957 21:03:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.957 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:09:56.215 21:03:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.215 21:03:12 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:56.215 21:03:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.215 21:03:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.215 21:03:12 -- common/autotest_common.sh@10 -- # set +x 00:09:56.781 21:03:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.782 21:03:12 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:56.782 21:03:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.782 21:03:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.782 21:03:12 -- common/autotest_common.sh@10 -- # set +x 00:09:57.039 21:03:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.039 21:03:12 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:57.039 21:03:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.039 21:03:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.039 21:03:12 -- common/autotest_common.sh@10 -- # set +x 00:09:57.298 21:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.298 21:03:13 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:57.298 21:03:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.298 21:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.298 21:03:13 -- common/autotest_common.sh@10 -- # set +x 00:09:57.555 21:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.555 21:03:13 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:57.556 21:03:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.556 21:03:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.556 21:03:13 -- common/autotest_common.sh@10 -- # set +x 00:09:57.814 21:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.814 21:03:13 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:57.814 21:03:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.814 21:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.814 21:03:13 -- common/autotest_common.sh@10 -- # set +x 00:09:58.380 21:03:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.380 21:03:14 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:58.380 21:03:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:58.380 21:03:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.380 21:03:14 -- common/autotest_common.sh@10 -- # set +x 00:09:58.638 21:03:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.639 21:03:14 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:58.639 21:03:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:58.639 21:03:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.639 21:03:14 -- common/autotest_common.sh@10 -- # set +x 00:09:58.897 21:03:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.897 21:03:14 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:58.897 21:03:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:58.897 21:03:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.897 21:03:14 -- common/autotest_common.sh@10 -- # set +x 00:09:59.155 21:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.155 21:03:15 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:59.155 21:03:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.155 21:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.155 21:03:15 -- common/autotest_common.sh@10 -- # set +x 00:09:59.721 21:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.721 21:03:15 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:59.721 21:03:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.721 21:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.722 21:03:15 -- common/autotest_common.sh@10 -- # set +x 00:09:59.980 21:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.980 21:03:15 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:09:59.980 21:03:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.980 21:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.980 21:03:15 -- common/autotest_common.sh@10 -- # set +x 00:10:00.238 21:03:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.238 21:03:16 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:00.238 21:03:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:00.238 21:03:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.238 21:03:16 -- common/autotest_common.sh@10 -- # set +x 00:10:00.496 21:03:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.496 21:03:16 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:00.496 21:03:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:00.496 21:03:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.496 21:03:16 -- common/autotest_common.sh@10 -- # set +x 00:10:00.754 21:03:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.754 21:03:16 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:00.754 21:03:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:00.754 21:03:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.754 21:03:16 -- common/autotest_common.sh@10 -- # set +x 00:10:01.320 21:03:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.320 21:03:16 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:01.320 21:03:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.320 21:03:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.320 21:03:16 -- common/autotest_common.sh@10 -- # set +x 00:10:01.578 21:03:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.578 21:03:17 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:01.578 21:03:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.578 21:03:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.578 21:03:17 -- common/autotest_common.sh@10 -- # set +x 00:10:01.836 21:03:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.836 21:03:17 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:01.836 21:03:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.836 21:03:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.837 21:03:17 -- common/autotest_common.sh@10 -- # set +x 00:10:02.095 21:03:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.095 21:03:17 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:02.095 21:03:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.095 21:03:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.095 21:03:17 -- common/autotest_common.sh@10 -- # set +x 00:10:02.660 21:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.660 21:03:18 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:02.660 21:03:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.660 21:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.660 21:03:18 -- common/autotest_common.sh@10 -- # set +x 00:10:02.918 21:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.918 21:03:18 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:02.918 21:03:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.918 21:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.918 21:03:18 -- common/autotest_common.sh@10 -- # set +x 00:10:03.177 21:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:03.177 21:03:18 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:03.177 21:03:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.177 21:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:03.177 21:03:18 -- common/autotest_common.sh@10 -- # set +x 00:10:03.522 21:03:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:03.522 21:03:19 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:03.522 21:03:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.523 21:03:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:03.523 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:10:03.780 21:03:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:03.780 21:03:19 -- target/connect_stress.sh@34 -- # kill -0 2946537 00:10:03.780 21:03:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.780 21:03:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:03.780 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:10:04.038 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.038 21:03:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.038 21:03:19 -- target/connect_stress.sh@34 -- # kill -0 2946537 
00:10:04.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2946537) - No such process 00:10:04.038 21:03:19 -- target/connect_stress.sh@38 -- # wait 2946537 00:10:04.038 21:03:19 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:04.038 21:03:19 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:04.038 21:03:19 -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:04.038 21:03:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:04.038 21:03:19 -- nvmf/common.sh@117 -- # sync 00:10:04.038 21:03:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.038 21:03:19 -- nvmf/common.sh@120 -- # set +e 00:10:04.038 21:03:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.038 21:03:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.038 rmmod nvme_tcp 00:10:04.038 rmmod nvme_fabrics 00:10:04.038 rmmod nvme_keyring 00:10:04.305 21:03:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.306 21:03:19 -- nvmf/common.sh@124 -- # set -e 00:10:04.306 21:03:19 -- nvmf/common.sh@125 -- # return 0 00:10:04.306 21:03:19 -- nvmf/common.sh@478 -- # '[' -n 2946104 ']' 00:10:04.306 21:03:19 -- nvmf/common.sh@479 -- # killprocess 2946104 00:10:04.306 21:03:19 -- common/autotest_common.sh@936 -- # '[' -z 2946104 ']' 00:10:04.306 21:03:19 -- common/autotest_common.sh@940 -- # kill -0 2946104 00:10:04.306 21:03:19 -- common/autotest_common.sh@941 -- # uname 00:10:04.306 21:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:04.306 21:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2946104 00:10:04.306 21:03:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:04.306 21:03:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:04.306 21:03:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2946104' 00:10:04.306 killing process with pid 2946104 00:10:04.306 21:03:20 -- common/autotest_common.sh@955 -- # kill 2946104 00:10:04.306 21:03:20 -- common/autotest_common.sh@960 -- # wait 2946104 00:10:04.573 21:03:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:04.573 21:03:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:04.573 21:03:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:04.573 21:03:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.573 21:03:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.573 21:03:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.573 21:03:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.573 21:03:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.481 21:03:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.481 00:10:06.481 real 0m19.712s 00:10:06.481 user 0m40.971s 00:10:06.481 sys 0m8.716s 00:10:06.481 21:03:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:06.481 21:03:22 -- common/autotest_common.sh@10 -- # set +x 00:10:06.481 ************************************ 00:10:06.481 END TEST nvmf_connect_stress 00:10:06.481 ************************************ 00:10:06.481 21:03:22 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:06.481 21:03:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:06.481 21:03:22 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:10:06.481 21:03:22 -- common/autotest_common.sh@10 -- # set +x 00:10:06.740 ************************************ 00:10:06.740 START TEST nvmf_fused_ordering 00:10:06.740 ************************************ 00:10:06.740 21:03:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:06.740 * Looking for test storage... 00:10:06.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.740 21:03:22 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.740 21:03:22 -- nvmf/common.sh@7 -- # uname -s 00:10:06.740 21:03:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.740 21:03:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.740 21:03:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.740 21:03:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.740 21:03:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.740 21:03:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.740 21:03:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.740 21:03:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.740 21:03:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.740 21:03:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.740 21:03:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.740 21:03:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.740 21:03:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.740 21:03:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.740 21:03:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.740 21:03:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.740 21:03:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.740 21:03:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.740 21:03:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.740 21:03:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.741 21:03:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.741 21:03:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.741 21:03:22 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.741 21:03:22 -- paths/export.sh@5 -- # export PATH 00:10:06.741 21:03:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.741 21:03:22 -- nvmf/common.sh@47 -- # : 0 00:10:06.741 21:03:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.741 21:03:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.741 21:03:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.741 21:03:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.741 21:03:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.741 21:03:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.741 21:03:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.741 21:03:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.741 21:03:22 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:06.741 21:03:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:06.741 21:03:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.741 21:03:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:06.741 21:03:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:06.741 21:03:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:06.741 21:03:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.741 21:03:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.741 21:03:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.741 21:03:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:06.741 21:03:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:06.741 21:03:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.741 21:03:22 -- common/autotest_common.sh@10 -- # set +x 00:10:13.309 21:03:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:13.309 21:03:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.309 21:03:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.309 21:03:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.309 21:03:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.309 21:03:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.309 21:03:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.309 21:03:28 -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.309 21:03:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.309 21:03:28 -- nvmf/common.sh@296 -- # e810=() 00:10:13.309 21:03:28 -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.309 21:03:28 -- nvmf/common.sh@297 -- # 
x722=() 00:10:13.309 21:03:28 -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.309 21:03:28 -- nvmf/common.sh@298 -- # mlx=() 00:10:13.309 21:03:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.309 21:03:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.309 21:03:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.309 21:03:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.309 21:03:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.309 21:03:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.309 21:03:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.309 21:03:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.309 21:03:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.310 21:03:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.310 21:03:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.310 21:03:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.310 21:03:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.310 21:03:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:13.310 21:03:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.310 21:03:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.310 21:03:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:13.310 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:13.310 21:03:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.310 21:03:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:13.310 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:13.310 21:03:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.310 21:03:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.310 21:03:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.310 21:03:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:13.310 21:03:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.310 21:03:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:13.310 Found net devices under 0000:86:00.0: cvl_0_0 00:10:13.310 21:03:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:10:13.310 21:03:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.310 21:03:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.310 21:03:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:13.310 21:03:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.310 21:03:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:13.310 Found net devices under 0000:86:00.1: cvl_0_1 00:10:13.310 21:03:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.310 21:03:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:13.310 21:03:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:13.310 21:03:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:13.310 21:03:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.310 21:03:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.310 21:03:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.310 21:03:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.310 21:03:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.310 21:03:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.310 21:03:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.310 21:03:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.310 21:03:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.310 21:03:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.310 21:03:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.310 21:03:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.310 21:03:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.310 21:03:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.310 21:03:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.310 21:03:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.310 21:03:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.310 21:03:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.310 21:03:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.310 21:03:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:10:13.310 00:10:13.310 --- 10.0.0.2 ping statistics --- 00:10:13.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.310 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:10:13.310 21:03:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:10:13.310 00:10:13.310 --- 10.0.0.1 ping statistics --- 00:10:13.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.310 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:10:13.310 21:03:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.310 21:03:28 -- nvmf/common.sh@411 -- # return 0 00:10:13.310 21:03:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:13.310 21:03:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.310 21:03:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:13.310 21:03:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.310 21:03:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:13.310 21:03:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:13.310 21:03:28 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:13.310 21:03:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:13.310 21:03:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:13.310 21:03:28 -- common/autotest_common.sh@10 -- # set +x 00:10:13.310 21:03:28 -- nvmf/common.sh@470 -- # nvmfpid=2952221 00:10:13.310 21:03:28 -- nvmf/common.sh@471 -- # waitforlisten 2952221 00:10:13.310 21:03:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:13.310 21:03:28 -- common/autotest_common.sh@817 -- # '[' -z 2952221 ']' 00:10:13.310 21:03:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.310 21:03:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:13.310 21:03:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.310 21:03:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:13.310 21:03:28 -- common/autotest_common.sh@10 -- # set +x 00:10:13.310 [2024-04-18 21:03:28.909283] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:10:13.310 [2024-04-18 21:03:28.909324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.310 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.310 [2024-04-18 21:03:28.972592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.310 [2024-04-18 21:03:29.046424] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.310 [2024-04-18 21:03:29.046464] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.310 [2024-04-18 21:03:29.046470] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.310 [2024-04-18 21:03:29.046477] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.310 [2024-04-18 21:03:29.046482] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:13.310 [2024-04-18 21:03:29.046503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.877 21:03:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:13.877 21:03:29 -- common/autotest_common.sh@850 -- # return 0 00:10:13.877 21:03:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:13.877 21:03:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:13.877 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:13.877 21:03:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.877 21:03:29 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.877 21:03:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.877 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:13.877 [2024-04-18 21:03:29.728831] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.877 21:03:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.877 21:03:29 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.877 21:03:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.877 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:13.877 21:03:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.877 21:03:29 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.877 21:03:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.877 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:13.877 [2024-04-18 21:03:29.744968] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.877 21:03:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.877 21:03:29 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:13.877 21:03:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.877 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:13.877 NULL1 00:10:13.877 21:03:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.877 21:03:29 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:13.877 21:03:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.877 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:13.877 21:03:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.877 21:03:29 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:13.877 21:03:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.877 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:13.877 21:03:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.877 21:03:29 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:13.877 [2024-04-18 21:03:29.797595] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
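The rpc_cmd calls above are the harness's wrapper around SPDK's scripts/rpc.py, talking to the freshly started nvmf_tgt over /var/tmp/spdk.sock. Spelled out directly against rpc.py (with $SPDK_ROOT standing in for the full /var/jenkins/... checkout path, and a simple polling loop standing in for waitforlisten), the bring-up and the fused_ordering run recorded here amount to roughly:

    # Start the target inside the namespace created above, then wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    RPC="$SPDK_ROOT/scripts/rpc.py"
    until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    "$RPC" nvmf_create_transport -t tcp -o -u 8192          # transport options exactly as in the run above
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512-byte blocks
    "$RPC" bdev_wait_for_examine
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Exercise fused command ordering against the exported namespace.
    "$SPDK_ROOT/test/nvme/fused_ordering/fused_ordering" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'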
00:10:13.877 [2024-04-18 21:03:29.797628] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952303 ] 00:10:14.135 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.702 Attached to nqn.2016-06.io.spdk:cnode1 00:10:14.702 Namespace ID: 1 size: 1GB 00:10:14.702 fused_ordering(0) 00:10:14.702 fused_ordering(1) 00:10:14.702 fused_ordering(2) 00:10:14.702 fused_ordering(3) 00:10:14.702 fused_ordering(4) 00:10:14.702 fused_ordering(5) 00:10:14.702 fused_ordering(6) 00:10:14.702 fused_ordering(7) 00:10:14.702 fused_ordering(8) 00:10:14.702 fused_ordering(9) 00:10:14.702 fused_ordering(10) 00:10:14.702 fused_ordering(11) 00:10:14.702 fused_ordering(12) 00:10:14.702 fused_ordering(13) 00:10:14.702 fused_ordering(14) 00:10:14.702 fused_ordering(15) 00:10:14.702 fused_ordering(16) 00:10:14.702 fused_ordering(17) 00:10:14.702 fused_ordering(18) 00:10:14.702 fused_ordering(19) 00:10:14.702 fused_ordering(20) 00:10:14.702 fused_ordering(21) 00:10:14.702 fused_ordering(22) 00:10:14.702 fused_ordering(23) 00:10:14.702 fused_ordering(24) 00:10:14.702 fused_ordering(25) 00:10:14.702 fused_ordering(26) 00:10:14.702 fused_ordering(27) 00:10:14.702 fused_ordering(28) 00:10:14.702 fused_ordering(29) 00:10:14.702 fused_ordering(30) 00:10:14.702 fused_ordering(31) 00:10:14.702 fused_ordering(32) 00:10:14.702 fused_ordering(33) 00:10:14.702 fused_ordering(34) 00:10:14.702 fused_ordering(35) 00:10:14.702 fused_ordering(36) 00:10:14.702 fused_ordering(37) 00:10:14.702 fused_ordering(38) 00:10:14.702 fused_ordering(39) 00:10:14.702 fused_ordering(40) 00:10:14.702 fused_ordering(41) 00:10:14.702 fused_ordering(42) 00:10:14.702 fused_ordering(43) 00:10:14.702 fused_ordering(44) 00:10:14.702 fused_ordering(45) 00:10:14.702 fused_ordering(46) 00:10:14.702 fused_ordering(47) 00:10:14.702 fused_ordering(48) 00:10:14.702 fused_ordering(49) 00:10:14.702 fused_ordering(50) 00:10:14.702 fused_ordering(51) 00:10:14.702 fused_ordering(52) 00:10:14.702 fused_ordering(53) 00:10:14.702 fused_ordering(54) 00:10:14.702 fused_ordering(55) 00:10:14.702 fused_ordering(56) 00:10:14.702 fused_ordering(57) 00:10:14.702 fused_ordering(58) 00:10:14.702 fused_ordering(59) 00:10:14.702 fused_ordering(60) 00:10:14.702 fused_ordering(61) 00:10:14.702 fused_ordering(62) 00:10:14.702 fused_ordering(63) 00:10:14.702 fused_ordering(64) 00:10:14.702 fused_ordering(65) 00:10:14.702 fused_ordering(66) 00:10:14.702 fused_ordering(67) 00:10:14.702 fused_ordering(68) 00:10:14.702 fused_ordering(69) 00:10:14.702 fused_ordering(70) 00:10:14.702 fused_ordering(71) 00:10:14.702 fused_ordering(72) 00:10:14.702 fused_ordering(73) 00:10:14.702 fused_ordering(74) 00:10:14.702 fused_ordering(75) 00:10:14.702 fused_ordering(76) 00:10:14.702 fused_ordering(77) 00:10:14.702 fused_ordering(78) 00:10:14.702 fused_ordering(79) 00:10:14.702 fused_ordering(80) 00:10:14.702 fused_ordering(81) 00:10:14.702 fused_ordering(82) 00:10:14.702 fused_ordering(83) 00:10:14.702 fused_ordering(84) 00:10:14.702 fused_ordering(85) 00:10:14.702 fused_ordering(86) 00:10:14.702 fused_ordering(87) 00:10:14.702 fused_ordering(88) 00:10:14.702 fused_ordering(89) 00:10:14.702 fused_ordering(90) 00:10:14.702 fused_ordering(91) 00:10:14.702 fused_ordering(92) 00:10:14.702 fused_ordering(93) 00:10:14.702 fused_ordering(94) 00:10:14.702 fused_ordering(95) 00:10:14.702 fused_ordering(96) 00:10:14.702 
[fused_ordering(97) through fused_ordering(956) logged in unbroken sequence between 00:10:14.702 and 00:10:17.033; the repetitive per-command entries are condensed here.]
fused_ordering(957) 00:10:17.033 fused_ordering(958) 00:10:17.033 fused_ordering(959) 00:10:17.033 fused_ordering(960) 00:10:17.033 fused_ordering(961) 00:10:17.033 fused_ordering(962) 00:10:17.033 fused_ordering(963) 00:10:17.033 fused_ordering(964) 00:10:17.033 fused_ordering(965) 00:10:17.033 fused_ordering(966) 00:10:17.033 fused_ordering(967) 00:10:17.033 fused_ordering(968) 00:10:17.033 fused_ordering(969) 00:10:17.033 fused_ordering(970) 00:10:17.033 fused_ordering(971) 00:10:17.033 fused_ordering(972) 00:10:17.033 fused_ordering(973) 00:10:17.033 fused_ordering(974) 00:10:17.033 fused_ordering(975) 00:10:17.033 fused_ordering(976) 00:10:17.033 fused_ordering(977) 00:10:17.033 fused_ordering(978) 00:10:17.033 fused_ordering(979) 00:10:17.033 fused_ordering(980) 00:10:17.033 fused_ordering(981) 00:10:17.033 fused_ordering(982) 00:10:17.033 fused_ordering(983) 00:10:17.033 fused_ordering(984) 00:10:17.033 fused_ordering(985) 00:10:17.033 fused_ordering(986) 00:10:17.033 fused_ordering(987) 00:10:17.033 fused_ordering(988) 00:10:17.033 fused_ordering(989) 00:10:17.033 fused_ordering(990) 00:10:17.033 fused_ordering(991) 00:10:17.033 fused_ordering(992) 00:10:17.033 fused_ordering(993) 00:10:17.033 fused_ordering(994) 00:10:17.033 fused_ordering(995) 00:10:17.033 fused_ordering(996) 00:10:17.033 fused_ordering(997) 00:10:17.033 fused_ordering(998) 00:10:17.033 fused_ordering(999) 00:10:17.033 fused_ordering(1000) 00:10:17.033 fused_ordering(1001) 00:10:17.033 fused_ordering(1002) 00:10:17.033 fused_ordering(1003) 00:10:17.033 fused_ordering(1004) 00:10:17.033 fused_ordering(1005) 00:10:17.033 fused_ordering(1006) 00:10:17.033 fused_ordering(1007) 00:10:17.033 fused_ordering(1008) 00:10:17.033 fused_ordering(1009) 00:10:17.033 fused_ordering(1010) 00:10:17.033 fused_ordering(1011) 00:10:17.033 fused_ordering(1012) 00:10:17.033 fused_ordering(1013) 00:10:17.033 fused_ordering(1014) 00:10:17.033 fused_ordering(1015) 00:10:17.033 fused_ordering(1016) 00:10:17.033 fused_ordering(1017) 00:10:17.033 fused_ordering(1018) 00:10:17.033 fused_ordering(1019) 00:10:17.033 fused_ordering(1020) 00:10:17.033 fused_ordering(1021) 00:10:17.033 fused_ordering(1022) 00:10:17.033 fused_ordering(1023) 00:10:17.033 21:03:32 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:17.033 21:03:32 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:17.033 21:03:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:17.033 21:03:32 -- nvmf/common.sh@117 -- # sync 00:10:17.033 21:03:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.033 21:03:32 -- nvmf/common.sh@120 -- # set +e 00:10:17.033 21:03:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.033 21:03:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.033 rmmod nvme_tcp 00:10:17.033 rmmod nvme_fabrics 00:10:17.033 rmmod nvme_keyring 00:10:17.033 21:03:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.033 21:03:32 -- nvmf/common.sh@124 -- # set -e 00:10:17.033 21:03:32 -- nvmf/common.sh@125 -- # return 0 00:10:17.033 21:03:32 -- nvmf/common.sh@478 -- # '[' -n 2952221 ']' 00:10:17.033 21:03:32 -- nvmf/common.sh@479 -- # killprocess 2952221 00:10:17.033 21:03:32 -- common/autotest_common.sh@936 -- # '[' -z 2952221 ']' 00:10:17.033 21:03:32 -- common/autotest_common.sh@940 -- # kill -0 2952221 00:10:17.033 21:03:32 -- common/autotest_common.sh@941 -- # uname 00:10:17.033 21:03:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:17.033 21:03:32 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 2952221 00:10:17.033 21:03:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:17.033 21:03:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:17.033 21:03:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2952221' 00:10:17.033 killing process with pid 2952221 00:10:17.033 21:03:32 -- common/autotest_common.sh@955 -- # kill 2952221 00:10:17.033 21:03:32 -- common/autotest_common.sh@960 -- # wait 2952221 00:10:17.293 21:03:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:17.293 21:03:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:17.293 21:03:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:17.293 21:03:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.293 21:03:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.293 21:03:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.293 21:03:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.293 21:03:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.197 21:03:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.197 00:10:19.197 real 0m12.576s 00:10:19.197 user 0m6.942s 00:10:19.197 sys 0m6.921s 00:10:19.197 21:03:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:19.197 21:03:35 -- common/autotest_common.sh@10 -- # set +x 00:10:19.197 ************************************ 00:10:19.197 END TEST nvmf_fused_ordering 00:10:19.197 ************************************ 00:10:19.197 21:03:35 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:19.197 21:03:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:19.197 21:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.197 21:03:35 -- common/autotest_common.sh@10 -- # set +x 00:10:19.455 ************************************ 00:10:19.455 START TEST nvmf_delete_subsystem 00:10:19.455 ************************************ 00:10:19.455 21:03:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:19.455 * Looking for test storage... 
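The teardown recorded just above is the nvmftestfini path: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt reactor process, and dismantle the namespace plumbing. A condensed sketch of the same cleanup, with the PID and names taken from this run (ip netns delete is this note's assumption for what remove_spdk_ns does underneath):

    PID=2952221              # nvmf_tgt started for the fused_ordering test
    NS=cvl_0_0_ns_spdk
    sync
    modprobe -v -r nvme-tcp          # pulls out nvme_tcp, nvme_fabrics, nvme_keyring as seen above
    modprobe -v -r nvme-fabrics
    kill "$PID"; wait "$PID" 2>/dev/null || true
    ip netns delete "$NS"            # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1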
00:10:19.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.455 21:03:35 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.455 21:03:35 -- nvmf/common.sh@7 -- # uname -s 00:10:19.455 21:03:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.455 21:03:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.455 21:03:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.455 21:03:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.455 21:03:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.455 21:03:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.455 21:03:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.455 21:03:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.455 21:03:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.455 21:03:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.455 21:03:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:19.455 21:03:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:19.455 21:03:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.455 21:03:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.455 21:03:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.455 21:03:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.455 21:03:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.455 21:03:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.455 21:03:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.455 21:03:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.455 21:03:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.455 21:03:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.455 21:03:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.455 21:03:35 -- paths/export.sh@5 -- # export PATH 00:10:19.455 21:03:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.455 21:03:35 -- nvmf/common.sh@47 -- # : 0 00:10:19.455 21:03:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.455 21:03:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.455 21:03:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.455 21:03:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.455 21:03:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.455 21:03:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.455 21:03:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.455 21:03:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.455 21:03:35 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:19.455 21:03:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:19.455 21:03:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.455 21:03:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:19.455 21:03:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:19.455 21:03:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:19.455 21:03:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.455 21:03:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.455 21:03:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.455 21:03:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:19.455 21:03:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:19.455 21:03:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.455 21:03:35 -- common/autotest_common.sh@10 -- # set +x 00:10:26.011 21:03:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:26.011 21:03:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:26.011 21:03:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:26.011 21:03:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:26.011 21:03:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:26.011 21:03:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:26.011 21:03:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:26.011 21:03:41 -- nvmf/common.sh@295 -- # net_devs=() 00:10:26.011 21:03:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:26.011 21:03:41 -- nvmf/common.sh@296 -- # e810=() 00:10:26.011 21:03:41 -- nvmf/common.sh@296 -- # local -ga e810 00:10:26.011 21:03:41 -- nvmf/common.sh@297 -- # x722=() 
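What follows is the harness matching the configured NIC family (e810 for this job) to concrete PCI functions and then to kernel net device names via sysfs. A rough stand-alone equivalent of that lookup, with lspci used here as a stand-in for the script's cached PCI scan and 0x159b being the E810 device ID reported below:

    # Enumerate E810 ports (vendor 0x8086, device 0x159b) and their net devices.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
        done
    done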
00:10:26.011 21:03:41 -- nvmf/common.sh@297 -- # local -ga x722 00:10:26.011 21:03:41 -- nvmf/common.sh@298 -- # mlx=() 00:10:26.011 21:03:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:26.011 21:03:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.011 21:03:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:26.011 21:03:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:26.011 21:03:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:26.011 21:03:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:26.012 21:03:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:26.012 21:03:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.012 21:03:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:26.012 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:26.012 21:03:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.012 21:03:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:26.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:26.012 21:03:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:26.012 21:03:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.012 21:03:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.012 21:03:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:26.012 21:03:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.012 21:03:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:26.012 Found net devices under 0000:86:00.0: cvl_0_0 00:10:26.012 21:03:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:10:26.012 21:03:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.012 21:03:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.012 21:03:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:26.012 21:03:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.012 21:03:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:26.012 Found net devices under 0000:86:00.1: cvl_0_1 00:10:26.012 21:03:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.012 21:03:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:26.012 21:03:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:26.012 21:03:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:26.012 21:03:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.012 21:03:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.012 21:03:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.012 21:03:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:26.012 21:03:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.012 21:03:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.012 21:03:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:26.012 21:03:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.012 21:03:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.012 21:03:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:26.012 21:03:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:26.012 21:03:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.012 21:03:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.012 21:03:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.012 21:03:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.012 21:03:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:26.012 21:03:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.012 21:03:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.012 21:03:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.012 21:03:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:26.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:10:26.012 00:10:26.012 --- 10.0.0.2 ping statistics --- 00:10:26.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.012 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:10:26.012 21:03:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:26.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:10:26.012 00:10:26.012 --- 10.0.0.1 ping statistics --- 00:10:26.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.012 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:10:26.012 21:03:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.012 21:03:41 -- nvmf/common.sh@411 -- # return 0 00:10:26.012 21:03:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:26.012 21:03:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.012 21:03:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:26.012 21:03:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.012 21:03:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:26.012 21:03:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:26.012 21:03:41 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:26.012 21:03:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:26.012 21:03:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:26.012 21:03:41 -- common/autotest_common.sh@10 -- # set +x 00:10:26.012 21:03:41 -- nvmf/common.sh@470 -- # nvmfpid=2956747 00:10:26.012 21:03:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:26.012 21:03:41 -- nvmf/common.sh@471 -- # waitforlisten 2956747 00:10:26.012 21:03:41 -- common/autotest_common.sh@817 -- # '[' -z 2956747 ']' 00:10:26.012 21:03:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.012 21:03:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:26.012 21:03:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.012 21:03:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:26.012 21:03:41 -- common/autotest_common.sh@10 -- # set +x 00:10:26.012 [2024-04-18 21:03:41.538320] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:10:26.012 [2024-04-18 21:03:41.538365] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.012 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.012 [2024-04-18 21:03:41.601942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:26.012 [2024-04-18 21:03:41.679439] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.012 [2024-04-18 21:03:41.679473] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.012 [2024-04-18 21:03:41.679480] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.012 [2024-04-18 21:03:41.679486] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.012 [2024-04-18 21:03:41.679491] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
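The delete_subsystem case that follows reuses the same transport and subsystem layout but slips a delay bdev (Delay0) between the namespace and the null bdev, so that spdk_nvme_perf still has a full queue of outstanding commands when the subsystem is deleted underneath it; the 1000000 values passed to bdev_delay_create below are microsecond latencies, roughly one second per I/O, and the later 'completed with error (sct=0, sc=8) ... starting I/O failed: -6' lines are that in-flight I/O being aborted as the subsystem's queues are torn down. Against rpc.py (again with $SPDK_ROOT standing in for the full checkout path), the sequence recorded below is approximately:

    RPC="$SPDK_ROOT/scripts/rpc.py"
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512
    # Delay bdev in front of the null bdev keeps I/O outstanding long enough to race the delete.
    "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0

    # 128 queued 512-byte random read/write I/Os for 5 seconds, started in the background...
    "$SPDK_ROOT/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    # ...then delete the subsystem while they are still in flight.
    "$RPC" nvmf_delete_subsystem "$NQN"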
00:10:26.012 [2024-04-18 21:03:41.679527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.012 [2024-04-18 21:03:41.679530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.576 21:03:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:26.576 21:03:42 -- common/autotest_common.sh@850 -- # return 0 00:10:26.576 21:03:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:26.576 21:03:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:26.576 21:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 21:03:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.576 21:03:42 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.576 21:03:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 21:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 [2024-04-18 21:03:42.380769] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.576 21:03:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 21:03:42 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:26.576 21:03:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 21:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 21:03:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 21:03:42 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.576 21:03:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 21:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 [2024-04-18 21:03:42.396901] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.576 21:03:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 21:03:42 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:26.576 21:03:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 21:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 NULL1 00:10:26.576 21:03:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 21:03:42 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:26.576 21:03:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.577 21:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:26.577 Delay0 00:10:26.577 21:03:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.577 21:03:42 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.577 21:03:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.577 21:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:26.577 21:03:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.577 21:03:42 -- target/delete_subsystem.sh@28 -- # perf_pid=2956790 00:10:26.577 21:03:42 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:26.577 21:03:42 -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:26.577 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.577 [2024-04-18 21:03:42.471554] 
subsystem.c:1517:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:10:29.102 21:03:44 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:29.102 21:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:29.102 21:03:44 -- common/autotest_common.sh@10 -- # set +x
[... a long run of repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and periodic 'starting I/O failed: -6' entries from the in-flight perf workload, interleaved with the qpair shutdown errors kept below ...]
00:10:29.103 [2024-04-18 21:03:44.601529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc48000c00 is same with the state(5) to be set
00:10:29.668 [2024-04-18 21:03:45.567677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a20e0 is same with the state(5) to be set
00:10:29.946 [2024-04-18 21:03:45.603086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc4800c2f0 is same with the state(5) to be set
00:10:29.946 [2024-04-18 21:03:45.604723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a3140 is same with the state(5) to be set
00:10:29.947 [2024-04-18 21:03:45.604990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a3f90 is same with the state(5) to be set
00:10:29.947 [2024-04-18 21:03:45.605138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aaca0 is same with the state(5) to be set
00:10:29.947 [2024-04-18 21:03:45.605671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a20e0 (9): Bad file descriptor
00:10:29.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:29.947 21:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:29.947 21:03:45 -- target/delete_subsystem.sh@34 -- # delay=0
00:10:29.947 21:03:45 -- target/delete_subsystem.sh@35 -- # kill -0 2956790
00:10:29.947 21:03:45 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:29.947 Initializing NVMe Controllers
00:10:29.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:29.947 Controller IO queue size 128, less than required.
00:10:29.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:29.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:29.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:29.947 Initialization complete. Launching workers.
00:10:29.947 ======================================================== 00:10:29.947 Latency(us) 00:10:29.947 Device Information : IOPS MiB/s Average min max 00:10:29.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.76 0.10 943784.33 445.12 1011412.70 00:10:29.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.03 0.07 901775.93 238.03 2000766.26 00:10:29.947 ======================================================== 00:10:29.947 Total : 348.78 0.17 925353.29 238.03 2000766.26 00:10:29.947 00:10:30.247 21:03:46 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:30.247 21:03:46 -- target/delete_subsystem.sh@35 -- # kill -0 2956790 00:10:30.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2956790) - No such process 00:10:30.247 21:03:46 -- target/delete_subsystem.sh@45 -- # NOT wait 2956790 00:10:30.247 21:03:46 -- common/autotest_common.sh@638 -- # local es=0 00:10:30.247 21:03:46 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2956790 00:10:30.247 21:03:46 -- common/autotest_common.sh@626 -- # local arg=wait 00:10:30.247 21:03:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:30.247 21:03:46 -- common/autotest_common.sh@630 -- # type -t wait 00:10:30.248 21:03:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:30.248 21:03:46 -- common/autotest_common.sh@641 -- # wait 2956790 00:10:30.248 21:03:46 -- common/autotest_common.sh@641 -- # es=1 00:10:30.248 21:03:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:30.248 21:03:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:30.248 21:03:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:30.248 21:03:46 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:30.248 21:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.248 21:03:46 -- common/autotest_common.sh@10 -- # set +x 00:10:30.248 21:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.248 21:03:46 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.248 21:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.248 21:03:46 -- common/autotest_common.sh@10 -- # set +x 00:10:30.248 [2024-04-18 21:03:46.135977] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.248 21:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.248 21:03:46 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.248 21:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.248 21:03:46 -- common/autotest_common.sh@10 -- # set +x 00:10:30.248 21:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.248 21:03:46 -- target/delete_subsystem.sh@54 -- # perf_pid=2957465 00:10:30.248 21:03:46 -- target/delete_subsystem.sh@56 -- # delay=0 00:10:30.248 21:03:46 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:30.248 21:03:46 -- target/delete_subsystem.sh@57 -- # kill -0 2957465 00:10:30.248 21:03:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:30.505 EAL: No free 2048 kB hugepages 
reported on node 1 00:10:30.505 [2024-04-18 21:03:46.202501] subsystem.c:1517:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:30.763 21:03:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:30.763 21:03:46 -- target/delete_subsystem.sh@57 -- # kill -0 2957465 00:10:30.763 21:03:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:31.329 21:03:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:31.329 21:03:47 -- target/delete_subsystem.sh@57 -- # kill -0 2957465 00:10:31.329 21:03:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:31.893 21:03:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:31.893 21:03:47 -- target/delete_subsystem.sh@57 -- # kill -0 2957465 00:10:31.893 21:03:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:32.458 21:03:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:32.458 21:03:48 -- target/delete_subsystem.sh@57 -- # kill -0 2957465 00:10:32.458 21:03:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:33.023 21:03:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:33.023 21:03:48 -- target/delete_subsystem.sh@57 -- # kill -0 2957465 00:10:33.023 21:03:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:33.280 21:03:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:33.280 21:03:49 -- target/delete_subsystem.sh@57 -- # kill -0 2957465 00:10:33.280 21:03:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:33.539 Initializing NVMe Controllers 00:10:33.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:33.539 Controller IO queue size 128, less than required. 00:10:33.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:33.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:33.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:33.539 Initialization complete. Launching workers. 
00:10:33.539 ======================================================== 00:10:33.539 Latency(us) 00:10:33.539 Device Information : IOPS MiB/s Average min max 00:10:33.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003971.77 1000285.06 1042944.68 00:10:33.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005543.10 1000344.17 1013650.90 00:10:33.539 ======================================================== 00:10:33.539 Total : 256.00 0.12 1004757.44 1000285.06 1042944.68 00:10:33.539 00:10:33.797 21:03:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:33.797 21:03:49 -- target/delete_subsystem.sh@57 -- # kill -0 2957465 00:10:33.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2957465) - No such process 00:10:33.797 21:03:49 -- target/delete_subsystem.sh@67 -- # wait 2957465 00:10:33.797 21:03:49 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:33.797 21:03:49 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:33.797 21:03:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:33.797 21:03:49 -- nvmf/common.sh@117 -- # sync 00:10:33.797 21:03:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:33.797 21:03:49 -- nvmf/common.sh@120 -- # set +e 00:10:33.797 21:03:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:33.797 21:03:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:33.797 rmmod nvme_tcp 00:10:33.797 rmmod nvme_fabrics 00:10:33.797 rmmod nvme_keyring 00:10:34.055 21:03:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.055 21:03:49 -- nvmf/common.sh@124 -- # set -e 00:10:34.055 21:03:49 -- nvmf/common.sh@125 -- # return 0 00:10:34.055 21:03:49 -- nvmf/common.sh@478 -- # '[' -n 2956747 ']' 00:10:34.055 21:03:49 -- nvmf/common.sh@479 -- # killprocess 2956747 00:10:34.055 21:03:49 -- common/autotest_common.sh@936 -- # '[' -z 2956747 ']' 00:10:34.055 21:03:49 -- common/autotest_common.sh@940 -- # kill -0 2956747 00:10:34.055 21:03:49 -- common/autotest_common.sh@941 -- # uname 00:10:34.055 21:03:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:34.055 21:03:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2956747 00:10:34.055 21:03:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:34.055 21:03:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:34.055 21:03:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2956747' 00:10:34.055 killing process with pid 2956747 00:10:34.055 21:03:49 -- common/autotest_common.sh@955 -- # kill 2956747 00:10:34.055 21:03:49 -- common/autotest_common.sh@960 -- # wait 2956747 00:10:34.313 21:03:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:34.313 21:03:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:34.313 21:03:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:34.313 21:03:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.313 21:03:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.313 21:03:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.313 21:03:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.313 21:03:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.215 21:03:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:36.215 00:10:36.215 real 0m16.838s 00:10:36.215 user 0m30.471s 00:10:36.215 sys 0m5.412s 
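Note: condensing the xtrace above, delete_subsystem.sh drives the target through scripts/rpc.py roughly as sketched below. The bdev_delay parameters and perf options are copied from this run; the point of the test is that nvmf_delete_subsystem is issued while spdk_nvme_perf still has I/O queued behind Delay0, so the outstanding requests complete with errors instead of hanging the target. Paths are repo-relative and illustrative only.

    # subsystem backed by a null bdev wrapped in a 1s (1000000 us) delay bdev
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # start I/O, then delete the subsystem out from under it
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait    # perf exits reporting the aborted commands as errors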
00:10:36.215 21:03:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:36.215 21:03:52 -- common/autotest_common.sh@10 -- # set +x 00:10:36.215 ************************************ 00:10:36.215 END TEST nvmf_delete_subsystem 00:10:36.215 ************************************ 00:10:36.215 21:03:52 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:36.215 21:03:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:36.215 21:03:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.215 21:03:52 -- common/autotest_common.sh@10 -- # set +x 00:10:36.473 ************************************ 00:10:36.473 START TEST nvmf_ns_masking 00:10:36.473 ************************************ 00:10:36.473 21:03:52 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:36.473 * Looking for test storage... 00:10:36.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.473 21:03:52 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.473 21:03:52 -- nvmf/common.sh@7 -- # uname -s 00:10:36.473 21:03:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.473 21:03:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.473 21:03:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.473 21:03:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.473 21:03:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.473 21:03:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.473 21:03:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.473 21:03:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.473 21:03:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.473 21:03:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.473 21:03:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:36.473 21:03:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:36.473 21:03:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.473 21:03:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.473 21:03:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.473 21:03:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.473 21:03:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.473 21:03:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.473 21:03:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.473 21:03:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.474 21:03:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.474 21:03:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.474 21:03:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.474 21:03:52 -- paths/export.sh@5 -- # export PATH 00:10:36.474 21:03:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.474 21:03:52 -- nvmf/common.sh@47 -- # : 0 00:10:36.474 21:03:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.474 21:03:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.474 21:03:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.474 21:03:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.474 21:03:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.474 21:03:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.474 21:03:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.474 21:03:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.474 21:03:52 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:36.474 21:03:52 -- target/ns_masking.sh@11 -- # loops=5 00:10:36.474 21:03:52 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:36.474 21:03:52 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:36.474 21:03:52 -- target/ns_masking.sh@15 -- # uuidgen 00:10:36.474 21:03:52 -- target/ns_masking.sh@15 -- # HOSTID=6e41440f-6b33-4137-b361-ba94a5bbaa5b 00:10:36.474 21:03:52 -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:36.474 21:03:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:36.474 21:03:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.474 21:03:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:36.474 21:03:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:36.474 21:03:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:36.474 21:03:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.474 21:03:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.474 21:03:52 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:10:36.474 21:03:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:36.474 21:03:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:36.474 21:03:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:36.474 21:03:52 -- common/autotest_common.sh@10 -- # set +x 00:10:43.037 21:03:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:43.037 21:03:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:43.037 21:03:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:43.037 21:03:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:43.037 21:03:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:43.037 21:03:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:43.037 21:03:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:43.037 21:03:58 -- nvmf/common.sh@295 -- # net_devs=() 00:10:43.037 21:03:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:43.037 21:03:58 -- nvmf/common.sh@296 -- # e810=() 00:10:43.037 21:03:58 -- nvmf/common.sh@296 -- # local -ga e810 00:10:43.037 21:03:58 -- nvmf/common.sh@297 -- # x722=() 00:10:43.037 21:03:58 -- nvmf/common.sh@297 -- # local -ga x722 00:10:43.037 21:03:58 -- nvmf/common.sh@298 -- # mlx=() 00:10:43.037 21:03:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:43.037 21:03:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.037 21:03:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:43.037 21:03:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:43.037 21:03:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:43.037 21:03:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.037 21:03:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:43.037 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:43.037 21:03:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.037 21:03:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:43.037 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:43.037 21:03:58 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:43.037 21:03:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.037 21:03:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.037 21:03:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:43.037 21:03:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.037 21:03:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:43.037 Found net devices under 0000:86:00.0: cvl_0_0 00:10:43.037 21:03:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.037 21:03:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.037 21:03:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.037 21:03:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:43.037 21:03:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.037 21:03:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:43.037 Found net devices under 0000:86:00.1: cvl_0_1 00:10:43.037 21:03:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.037 21:03:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:43.037 21:03:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:43.037 21:03:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:43.037 21:03:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:43.037 21:03:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.037 21:03:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.037 21:03:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.037 21:03:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:43.037 21:03:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.037 21:03:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.037 21:03:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:43.037 21:03:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.037 21:03:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.037 21:03:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:43.037 21:03:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:43.037 21:03:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.037 21:03:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.037 21:03:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.037 21:03:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.037 21:03:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:43.037 21:03:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.037 21:03:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.037 21:03:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.037 21:03:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:43.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:43.037 00:10:43.037 --- 10.0.0.2 ping statistics --- 00:10:43.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.037 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:43.037 21:03:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:10:43.037 00:10:43.037 --- 10.0.0.1 ping statistics --- 00:10:43.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.037 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:10:43.037 21:03:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.037 21:03:58 -- nvmf/common.sh@411 -- # return 0 00:10:43.037 21:03:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:43.037 21:03:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.037 21:03:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:43.038 21:03:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:43.038 21:03:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.038 21:03:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:43.038 21:03:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:43.038 21:03:58 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:43.038 21:03:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:43.038 21:03:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:43.038 21:03:58 -- common/autotest_common.sh@10 -- # set +x 00:10:43.038 21:03:58 -- nvmf/common.sh@470 -- # nvmfpid=2961985 00:10:43.038 21:03:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.038 21:03:58 -- nvmf/common.sh@471 -- # waitforlisten 2961985 00:10:43.038 21:03:58 -- common/autotest_common.sh@817 -- # '[' -z 2961985 ']' 00:10:43.038 21:03:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.038 21:03:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:43.038 21:03:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.038 21:03:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:43.038 21:03:58 -- common/autotest_common.sh@10 -- # set +x 00:10:43.038 [2024-04-18 21:03:58.675271] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:10:43.038 [2024-04-18 21:03:58.675311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.038 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.038 [2024-04-18 21:03:58.742385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.038 [2024-04-18 21:03:58.818834] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:43.038 [2024-04-18 21:03:58.818877] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.038 [2024-04-18 21:03:58.818884] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.038 [2024-04-18 21:03:58.818890] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.038 [2024-04-18 21:03:58.818894] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.038 [2024-04-18 21:03:58.818937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.038 [2024-04-18 21:03:58.819044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.038 [2024-04-18 21:03:58.819129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.038 [2024-04-18 21:03:58.819130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.602 21:03:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:43.602 21:03:59 -- common/autotest_common.sh@850 -- # return 0 00:10:43.602 21:03:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:43.602 21:03:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:43.602 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:10:43.602 21:03:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.602 21:03:59 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:43.860 [2024-04-18 21:03:59.662905] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.860 21:03:59 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:43.860 21:03:59 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:43.860 21:03:59 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:44.118 Malloc1 00:10:44.118 21:03:59 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:44.375 Malloc2 00:10:44.375 21:04:00 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.375 21:04:00 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:44.633 21:04:00 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.890 [2024-04-18 21:04:00.579606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.890 21:04:00 -- target/ns_masking.sh@61 -- # connect 00:10:44.890 21:04:00 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e41440f-6b33-4137-b361-ba94a5bbaa5b -a 10.0.0.2 -s 4420 -i 4 00:10:44.890 21:04:00 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.890 21:04:00 -- common/autotest_common.sh@1184 -- # local i=0 00:10:44.890 21:04:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.890 21:04:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
00:10:44.890 21:04:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:47.414 21:04:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:47.414 21:04:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:47.414 21:04:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.414 21:04:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:47.414 21:04:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.414 21:04:02 -- common/autotest_common.sh@1194 -- # return 0 00:10:47.414 21:04:02 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:47.414 21:04:02 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:47.414 21:04:02 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:47.414 21:04:02 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:47.414 21:04:02 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:47.414 21:04:02 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:47.415 21:04:02 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:47.415 [ 0]:0x1 00:10:47.415 21:04:02 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:47.415 21:04:02 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:47.415 21:04:02 -- target/ns_masking.sh@40 -- # nguid=d75d05afd5b64595aec80a4e792e6f65 00:10:47.415 21:04:02 -- target/ns_masking.sh@41 -- # [[ d75d05afd5b64595aec80a4e792e6f65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.415 21:04:02 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:47.415 21:04:03 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:47.415 21:04:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:47.415 21:04:03 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:47.415 [ 0]:0x1 00:10:47.415 21:04:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:47.415 21:04:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:47.415 21:04:03 -- target/ns_masking.sh@40 -- # nguid=d75d05afd5b64595aec80a4e792e6f65 00:10:47.415 21:04:03 -- target/ns_masking.sh@41 -- # [[ d75d05afd5b64595aec80a4e792e6f65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.415 21:04:03 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:47.415 21:04:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:47.415 21:04:03 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:47.415 [ 1]:0x2 00:10:47.415 21:04:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:47.415 21:04:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:47.415 21:04:03 -- target/ns_masking.sh@40 -- # nguid=55667ec7af954951948708298d7a2ed0 00:10:47.415 21:04:03 -- target/ns_masking.sh@41 -- # [[ 55667ec7af954951948708298d7a2ed0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.415 21:04:03 -- target/ns_masking.sh@69 -- # disconnect 00:10:47.415 21:04:03 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.672 21:04:03 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.672 21:04:03 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:47.928 21:04:03 -- target/ns_masking.sh@77 -- # connect 1 00:10:47.928 21:04:03 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e41440f-6b33-4137-b361-ba94a5bbaa5b -a 10.0.0.2 -s 4420 -i 4 00:10:48.184 21:04:03 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:48.184 21:04:03 -- common/autotest_common.sh@1184 -- # local i=0 00:10:48.184 21:04:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.184 21:04:03 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:10:48.184 21:04:03 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:10:48.184 21:04:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:50.078 21:04:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:50.079 21:04:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:50.079 21:04:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.079 21:04:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:50.079 21:04:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.079 21:04:05 -- common/autotest_common.sh@1194 -- # return 0 00:10:50.079 21:04:05 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:50.079 21:04:05 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:50.079 21:04:05 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:50.079 21:04:05 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:50.079 21:04:05 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:50.079 21:04:05 -- common/autotest_common.sh@638 -- # local es=0 00:10:50.079 21:04:05 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:50.079 21:04:05 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:50.079 21:04:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:50.079 21:04:05 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:50.079 21:04:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:50.079 21:04:05 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:50.079 21:04:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:50.079 21:04:05 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:50.079 21:04:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.079 21:04:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:50.079 21:04:05 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:50.079 21:04:05 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.079 21:04:05 -- common/autotest_common.sh@641 -- # es=1 00:10:50.079 21:04:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:50.079 21:04:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:50.079 21:04:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:50.079 21:04:05 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:50.079 21:04:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:50.079 21:04:05 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:50.079 [ 0]:0x2 00:10:50.079 21:04:05 -- target/ns_masking.sh@40 -- # jq -r 
.nguid 00:10:50.079 21:04:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:50.336 21:04:06 -- target/ns_masking.sh@40 -- # nguid=55667ec7af954951948708298d7a2ed0 00:10:50.336 21:04:06 -- target/ns_masking.sh@41 -- # [[ 55667ec7af954951948708298d7a2ed0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.336 21:04:06 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:50.336 21:04:06 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:50.336 21:04:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:50.336 21:04:06 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:50.336 [ 0]:0x1 00:10:50.336 21:04:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.336 21:04:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:50.336 21:04:06 -- target/ns_masking.sh@40 -- # nguid=d75d05afd5b64595aec80a4e792e6f65 00:10:50.336 21:04:06 -- target/ns_masking.sh@41 -- # [[ d75d05afd5b64595aec80a4e792e6f65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.336 21:04:06 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:50.593 21:04:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:50.593 21:04:06 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:50.593 [ 1]:0x2 00:10:50.593 21:04:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:50.593 21:04:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:50.593 21:04:06 -- target/ns_masking.sh@40 -- # nguid=55667ec7af954951948708298d7a2ed0 00:10:50.593 21:04:06 -- target/ns_masking.sh@41 -- # [[ 55667ec7af954951948708298d7a2ed0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.593 21:04:06 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:50.851 21:04:06 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:50.851 21:04:06 -- common/autotest_common.sh@638 -- # local es=0 00:10:50.851 21:04:06 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:50.851 21:04:06 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:50.851 21:04:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:50.851 21:04:06 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:50.851 21:04:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:50.851 21:04:06 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:50.851 21:04:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:50.851 21:04:06 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:50.851 21:04:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:50.851 21:04:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.851 21:04:06 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:50.851 21:04:06 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.851 21:04:06 -- common/autotest_common.sh@641 -- # es=1 00:10:50.851 21:04:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:50.851 21:04:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:50.851 21:04:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:50.851 21:04:06 
-- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:50.851 21:04:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:50.851 21:04:06 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:50.851 [ 0]:0x2 00:10:50.851 21:04:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:50.851 21:04:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:50.851 21:04:06 -- target/ns_masking.sh@40 -- # nguid=55667ec7af954951948708298d7a2ed0 00:10:50.851 21:04:06 -- target/ns_masking.sh@41 -- # [[ 55667ec7af954951948708298d7a2ed0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.851 21:04:06 -- target/ns_masking.sh@91 -- # disconnect 00:10:50.851 21:04:06 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.851 21:04:06 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:51.108 21:04:06 -- target/ns_masking.sh@95 -- # connect 2 00:10:51.108 21:04:06 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e41440f-6b33-4137-b361-ba94a5bbaa5b -a 10.0.0.2 -s 4420 -i 4 00:10:51.366 21:04:07 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:51.366 21:04:07 -- common/autotest_common.sh@1184 -- # local i=0 00:10:51.366 21:04:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.366 21:04:07 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:51.366 21:04:07 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:51.366 21:04:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:53.313 21:04:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:53.313 21:04:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:53.313 21:04:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.313 21:04:09 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:10:53.313 21:04:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.313 21:04:09 -- common/autotest_common.sh@1194 -- # return 0 00:10:53.313 21:04:09 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:53.313 21:04:09 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:53.313 21:04:09 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:53.314 21:04:09 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:53.314 21:04:09 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:53.314 21:04:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:53.314 21:04:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:53.572 [ 0]:0x1 00:10:53.572 21:04:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:53.572 21:04:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:53.572 21:04:09 -- target/ns_masking.sh@40 -- # nguid=d75d05afd5b64595aec80a4e792e6f65 00:10:53.572 21:04:09 -- target/ns_masking.sh@41 -- # [[ d75d05afd5b64595aec80a4e792e6f65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:53.572 21:04:09 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:53.572 21:04:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:53.572 21:04:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:53.572 [ 1]:0x2 
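Every visibility check in this trace follows the same pattern: list the active namespace IDs on the controller, then read the namespace's NGUID. A namespace masked from this host drops out of list-ns and identify returns an all-zero NGUID, which is exactly what the comparison against the zero string asserts. A rough bash equivalent of the ns_is_visible helper being traced (a paraphrase of the commands shown in the log, not the script's exact source; nvme0 is the controller name picked up earlier from nvme list-subsys):

    ns_is_visible() {
        local nsid=$1                                   # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$nsid"          # prints "[ 0]:0x1" when the NS is listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ "$nguid" != "00000000000000000000000000000000" ]]   # all-zero NGUID => masked
    }

Masking itself is driven from the target side: nvmf_subsystem_add_ns ... --no-auto-visible attaches a namespace that no host sees by default, and nvmf_ns_add_host / nvmf_ns_remove_host grant or revoke visibility per host NQN, as exercised above.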
00:10:53.572 21:04:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:53.572 21:04:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:53.572 21:04:09 -- target/ns_masking.sh@40 -- # nguid=55667ec7af954951948708298d7a2ed0 00:10:53.572 21:04:09 -- target/ns_masking.sh@41 -- # [[ 55667ec7af954951948708298d7a2ed0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:53.572 21:04:09 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:53.830 21:04:09 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:53.830 21:04:09 -- common/autotest_common.sh@638 -- # local es=0 00:10:53.830 21:04:09 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:53.830 21:04:09 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:53.830 21:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:53.830 21:04:09 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:53.830 21:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:53.830 21:04:09 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:53.830 21:04:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:53.830 21:04:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:53.830 21:04:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:53.830 21:04:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:53.830 21:04:09 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:53.830 21:04:09 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:53.830 21:04:09 -- common/autotest_common.sh@641 -- # es=1 00:10:53.830 21:04:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:53.830 21:04:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:53.830 21:04:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:53.830 21:04:09 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:53.830 21:04:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:53.830 21:04:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:53.830 [ 0]:0x2 00:10:53.830 21:04:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:53.830 21:04:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:53.830 21:04:09 -- target/ns_masking.sh@40 -- # nguid=55667ec7af954951948708298d7a2ed0 00:10:53.830 21:04:09 -- target/ns_masking.sh@41 -- # [[ 55667ec7af954951948708298d7a2ed0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:53.830 21:04:09 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:53.830 21:04:09 -- common/autotest_common.sh@638 -- # local es=0 00:10:53.830 21:04:09 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:53.830 21:04:09 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.830 21:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:53.830 21:04:09 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.830 21:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:53.830 21:04:09 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.830 21:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:53.830 21:04:09 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.830 21:04:09 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:53.830 21:04:09 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:54.089 [2024-04-18 21:04:09.873490] nvmf_rpc.c:1783:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:54.089 request: 00:10:54.089 { 00:10:54.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.089 "nsid": 2, 00:10:54.089 "host": "nqn.2016-06.io.spdk:host1", 00:10:54.089 "method": "nvmf_ns_remove_host", 00:10:54.089 "req_id": 1 00:10:54.089 } 00:10:54.089 Got JSON-RPC error response 00:10:54.089 response: 00:10:54.089 { 00:10:54.089 "code": -32602, 00:10:54.089 "message": "Invalid parameters" 00:10:54.089 } 00:10:54.089 21:04:09 -- common/autotest_common.sh@641 -- # es=1 00:10:54.089 21:04:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:54.089 21:04:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:54.089 21:04:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:54.090 21:04:09 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:54.090 21:04:09 -- common/autotest_common.sh@638 -- # local es=0 00:10:54.090 21:04:09 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:54.090 21:04:09 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:54.090 21:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:54.090 21:04:09 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:54.090 21:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:54.090 21:04:09 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:54.090 21:04:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:54.090 21:04:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:54.090 21:04:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:54.090 21:04:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:54.090 21:04:09 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:54.090 21:04:09 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.090 21:04:09 -- common/autotest_common.sh@641 -- # es=1 00:10:54.090 21:04:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:54.090 21:04:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:54.090 21:04:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:54.090 21:04:09 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:54.090 21:04:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:54.090 21:04:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:54.090 [ 0]:0x2 00:10:54.090 21:04:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:54.090 21:04:09 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:10:54.090 21:04:10 -- target/ns_masking.sh@40 -- # nguid=55667ec7af954951948708298d7a2ed0 00:10:54.090 21:04:10 -- target/ns_masking.sh@41 -- # [[ 55667ec7af954951948708298d7a2ed0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.090 21:04:10 -- target/ns_masking.sh@108 -- # disconnect 00:10:54.090 21:04:10 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.348 21:04:10 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.348 21:04:10 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:54.348 21:04:10 -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:54.348 21:04:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:54.348 21:04:10 -- nvmf/common.sh@117 -- # sync 00:10:54.348 21:04:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.348 21:04:10 -- nvmf/common.sh@120 -- # set +e 00:10:54.348 21:04:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.348 21:04:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.348 rmmod nvme_tcp 00:10:54.348 rmmod nvme_fabrics 00:10:54.348 rmmod nvme_keyring 00:10:54.606 21:04:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.606 21:04:10 -- nvmf/common.sh@124 -- # set -e 00:10:54.606 21:04:10 -- nvmf/common.sh@125 -- # return 0 00:10:54.606 21:04:10 -- nvmf/common.sh@478 -- # '[' -n 2961985 ']' 00:10:54.606 21:04:10 -- nvmf/common.sh@479 -- # killprocess 2961985 00:10:54.606 21:04:10 -- common/autotest_common.sh@936 -- # '[' -z 2961985 ']' 00:10:54.606 21:04:10 -- common/autotest_common.sh@940 -- # kill -0 2961985 00:10:54.606 21:04:10 -- common/autotest_common.sh@941 -- # uname 00:10:54.606 21:04:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:54.606 21:04:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2961985 00:10:54.606 21:04:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:54.606 21:04:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:54.606 21:04:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2961985' 00:10:54.606 killing process with pid 2961985 00:10:54.606 21:04:10 -- common/autotest_common.sh@955 -- # kill 2961985 00:10:54.606 21:04:10 -- common/autotest_common.sh@960 -- # wait 2961985 00:10:54.865 21:04:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:54.865 21:04:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:54.865 21:04:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:54.865 21:04:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.865 21:04:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.865 21:04:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.865 21:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.865 21:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.768 21:04:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.768 00:10:56.768 real 0m20.441s 00:10:56.768 user 0m51.230s 00:10:56.768 sys 0m6.356s 00:10:56.768 21:04:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:56.768 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:10:56.768 ************************************ 00:10:56.768 END TEST nvmf_ns_masking 00:10:56.768 
************************************ 00:10:56.768 21:04:12 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:56.768 21:04:12 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:56.769 21:04:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:56.769 21:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.769 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:10:57.027 ************************************ 00:10:57.027 START TEST nvmf_nvme_cli 00:10:57.027 ************************************ 00:10:57.027 21:04:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:57.027 * Looking for test storage... 00:10:57.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.027 21:04:12 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.027 21:04:12 -- nvmf/common.sh@7 -- # uname -s 00:10:57.027 21:04:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.027 21:04:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.027 21:04:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.027 21:04:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.027 21:04:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.027 21:04:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.027 21:04:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.027 21:04:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.027 21:04:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.027 21:04:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.027 21:04:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.027 21:04:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.027 21:04:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.027 21:04:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.027 21:04:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.027 21:04:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.027 21:04:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.027 21:04:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.027 21:04:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.027 21:04:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.027 21:04:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.027 21:04:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.027 21:04:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.027 21:04:12 -- paths/export.sh@5 -- # export PATH 00:10:57.027 21:04:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.027 21:04:12 -- nvmf/common.sh@47 -- # : 0 00:10:57.027 21:04:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.027 21:04:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.027 21:04:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.027 21:04:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.027 21:04:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.027 21:04:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.027 21:04:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.027 21:04:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.027 21:04:12 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.027 21:04:12 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.027 21:04:12 -- target/nvme_cli.sh@14 -- # devs=() 00:10:57.027 21:04:12 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:57.027 21:04:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:57.027 21:04:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.027 21:04:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:57.027 21:04:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:57.027 21:04:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:57.027 21:04:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.027 21:04:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.027 21:04:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.027 21:04:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:57.027 21:04:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:57.027 21:04:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.027 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:11:03.586 21:04:18 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:03.586 21:04:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.586 21:04:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.586 21:04:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.586 21:04:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.586 21:04:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.586 21:04:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.586 21:04:18 -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.586 21:04:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.586 21:04:18 -- nvmf/common.sh@296 -- # e810=() 00:11:03.586 21:04:18 -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.587 21:04:18 -- nvmf/common.sh@297 -- # x722=() 00:11:03.587 21:04:18 -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.587 21:04:18 -- nvmf/common.sh@298 -- # mlx=() 00:11:03.587 21:04:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.587 21:04:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.587 21:04:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.587 21:04:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.587 21:04:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.587 21:04:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.587 21:04:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:03.587 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:03.587 21:04:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.587 21:04:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:03.587 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:03.587 21:04:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
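At this point common.sh has located the two e810 ports (0000:86:00.0 and 0000:86:00.1, presenting net devices cvl_0_0 and cvl_0_1) and designates one as the target-side interface and the other as the initiator-side interface. The nvmf_tcp_init steps in the trace that follows then build a point-to-point test bed on a single machine by moving the target port into a network namespace. Condensed here for readability (addresses as in the log; the addr-flush and loopback-up steps are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # cvl_0_0 becomes the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target reachability check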
00:11:03.587 21:04:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.587 21:04:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.587 21:04:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.587 21:04:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:03.587 21:04:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.587 21:04:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:03.587 Found net devices under 0000:86:00.0: cvl_0_0 00:11:03.587 21:04:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.587 21:04:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.587 21:04:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.587 21:04:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:03.587 21:04:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.587 21:04:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:03.587 Found net devices under 0000:86:00.1: cvl_0_1 00:11:03.587 21:04:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.587 21:04:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:03.587 21:04:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:03.587 21:04:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:03.587 21:04:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:03.587 21:04:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.587 21:04:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.587 21:04:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.587 21:04:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.587 21:04:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.587 21:04:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.587 21:04:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.587 21:04:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.587 21:04:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.587 21:04:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.587 21:04:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.587 21:04:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.587 21:04:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.587 21:04:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.587 21:04:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.587 21:04:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.587 21:04:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.587 21:04:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.587 21:04:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.587 21:04:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:03.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:11:03.587 00:11:03.587 --- 10.0.0.2 ping statistics --- 00:11:03.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.587 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:11:03.587 21:04:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:11:03.587 00:11:03.587 --- 10.0.0.1 ping statistics --- 00:11:03.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.587 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:11:03.587 21:04:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.587 21:04:19 -- nvmf/common.sh@411 -- # return 0 00:11:03.587 21:04:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:03.587 21:04:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.587 21:04:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:03.587 21:04:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:03.587 21:04:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.587 21:04:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:03.587 21:04:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:03.587 21:04:19 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:03.587 21:04:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:03.587 21:04:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:03.587 21:04:19 -- common/autotest_common.sh@10 -- # set +x 00:11:03.587 21:04:19 -- nvmf/common.sh@470 -- # nvmfpid=2967986 00:11:03.587 21:04:19 -- nvmf/common.sh@471 -- # waitforlisten 2967986 00:11:03.587 21:04:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.587 21:04:19 -- common/autotest_common.sh@817 -- # '[' -z 2967986 ']' 00:11:03.587 21:04:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.587 21:04:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:03.587 21:04:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.587 21:04:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:03.587 21:04:19 -- common/autotest_common.sh@10 -- # set +x 00:11:03.587 [2024-04-18 21:04:19.159100] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:11:03.587 [2024-04-18 21:04:19.159145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.587 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.587 [2024-04-18 21:04:19.222780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.587 [2024-04-18 21:04:19.301017] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.587 [2024-04-18 21:04:19.301061] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:03.587 [2024-04-18 21:04:19.301069] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.587 [2024-04-18 21:04:19.301075] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.587 [2024-04-18 21:04:19.301080] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.587 [2024-04-18 21:04:19.301126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.587 [2024-04-18 21:04:19.301224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.587 [2024-04-18 21:04:19.301304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.587 [2024-04-18 21:04:19.301305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.152 21:04:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:04.152 21:04:19 -- common/autotest_common.sh@850 -- # return 0 00:11:04.152 21:04:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:04.152 21:04:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:04.152 21:04:19 -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 21:04:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.152 21:04:20 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.152 21:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.152 21:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 [2024-04-18 21:04:20.016700] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.152 21:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.152 21:04:20 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.152 21:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.152 21:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 Malloc0 00:11:04.152 21:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.152 21:04:20 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:04.152 21:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.152 21:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 Malloc1 00:11:04.152 21:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.152 21:04:20 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:04.152 21:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.152 21:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 21:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.152 21:04:20 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.152 21:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.152 21:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 21:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.152 21:04:20 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.152 21:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.152 21:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.409 21:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.409 21:04:20 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:04.409 21:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.410 21:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.410 [2024-04-18 21:04:20.093227] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.410 21:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.410 21:04:20 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:04.410 21:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.410 21:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.410 21:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.410 21:04:20 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:04.410 00:11:04.410 Discovery Log Number of Records 2, Generation counter 2 00:11:04.410 =====Discovery Log Entry 0====== 00:11:04.410 trtype: tcp 00:11:04.410 adrfam: ipv4 00:11:04.410 subtype: current discovery subsystem 00:11:04.410 treq: not required 00:11:04.410 portid: 0 00:11:04.410 trsvcid: 4420 00:11:04.410 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:04.410 traddr: 10.0.0.2 00:11:04.410 eflags: explicit discovery connections, duplicate discovery information 00:11:04.410 sectype: none 00:11:04.410 =====Discovery Log Entry 1====== 00:11:04.410 trtype: tcp 00:11:04.410 adrfam: ipv4 00:11:04.410 subtype: nvme subsystem 00:11:04.410 treq: not required 00:11:04.410 portid: 0 00:11:04.410 trsvcid: 4420 00:11:04.410 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:04.410 traddr: 10.0.0.2 00:11:04.410 eflags: none 00:11:04.410 sectype: none 00:11:04.410 21:04:20 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:04.410 21:04:20 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:04.410 21:04:20 -- nvmf/common.sh@511 -- # local dev _ 00:11:04.410 21:04:20 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:04.410 21:04:20 -- nvmf/common.sh@510 -- # nvme list 00:11:04.410 21:04:20 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:04.410 21:04:20 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:04.410 21:04:20 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:04.410 21:04:20 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:04.410 21:04:20 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:04.410 21:04:20 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.780 21:04:21 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:05.780 21:04:21 -- common/autotest_common.sh@1184 -- # local i=0 00:11:05.780 21:04:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.780 21:04:21 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:05.780 21:04:21 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:05.780 21:04:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:07.675 21:04:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:07.675 21:04:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:07.675 21:04:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.675 21:04:23 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
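From here the nvme_cli test exercises stock nvme-cli against that target. The initiator-side flow, condensed from the trace above and the disconnect that follows (<hostnqn> and <hostid> stand for the generated nqn.2014-08.org.nvmexpress:uuid values shown in the log):

    nvme discover --hostnqn=<hostnqn> --hostid=<hostid> -t tcp -a 10.0.0.2 -s 4420
    #   -> two discovery log entries: the discovery subsystem and nqn.2016-06.io.spdk:cnode1
    nvme connect  --hostnqn=<hostnqn> --hostid=<hostid> -t tcp -n nqn.2016-06.io.spdk:cnode1 \
                  -a 10.0.0.2 -s 4420
    nvme list     # Malloc0 and Malloc1 surface as /dev/nvme0n1 and /dev/nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1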
00:11:07.675 21:04:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.675 21:04:23 -- common/autotest_common.sh@1194 -- # return 0 00:11:07.675 21:04:23 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:07.675 21:04:23 -- nvmf/common.sh@511 -- # local dev _ 00:11:07.675 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@510 -- # nvme list 00:11:07.676 21:04:23 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:07.676 21:04:23 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:07.676 21:04:23 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:07.676 /dev/nvme0n1 ]] 00:11:07.676 21:04:23 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:07.676 21:04:23 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:07.676 21:04:23 -- nvmf/common.sh@511 -- # local dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@510 -- # nvme list 00:11:07.676 21:04:23 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:07.676 21:04:23 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:07.676 21:04:23 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:07.676 21:04:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:07.676 21:04:23 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:07.676 21:04:23 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.933 21:04:23 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.933 21:04:23 -- common/autotest_common.sh@1205 -- # local i=0 00:11:07.933 21:04:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:07.933 21:04:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.933 21:04:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:07.933 21:04:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.933 21:04:23 -- common/autotest_common.sh@1217 -- # return 0 00:11:07.933 21:04:23 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:07.933 21:04:23 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.933 21:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.933 21:04:23 -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 21:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.933 21:04:23 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:07.933 21:04:23 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:07.933 21:04:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:07.933 21:04:23 -- nvmf/common.sh@117 -- # sync 00:11:07.933 21:04:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.933 21:04:23 -- nvmf/common.sh@120 -- # set +e 00:11:07.933 21:04:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.933 21:04:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.933 rmmod nvme_tcp 00:11:07.933 rmmod nvme_fabrics 00:11:07.933 rmmod nvme_keyring 00:11:07.933 21:04:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.933 21:04:23 -- nvmf/common.sh@124 -- # set -e 00:11:07.933 21:04:23 -- nvmf/common.sh@125 -- # return 0 00:11:07.933 21:04:23 -- nvmf/common.sh@478 -- # '[' -n 2967986 ']' 00:11:07.933 21:04:23 -- nvmf/common.sh@479 -- # killprocess 2967986 00:11:07.933 21:04:23 -- common/autotest_common.sh@936 -- # '[' -z 2967986 ']' 00:11:07.933 21:04:23 -- common/autotest_common.sh@940 -- # kill -0 2967986 00:11:07.933 21:04:23 -- common/autotest_common.sh@941 -- # uname 00:11:07.933 21:04:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.933 21:04:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2967986 00:11:07.933 21:04:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:07.933 21:04:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:07.933 21:04:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2967986' 00:11:07.933 killing process with pid 2967986 00:11:07.933 21:04:23 -- common/autotest_common.sh@955 -- # kill 2967986 00:11:07.933 21:04:23 -- common/autotest_common.sh@960 -- # wait 2967986 00:11:08.192 21:04:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:08.192 21:04:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:08.192 21:04:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:08.192 21:04:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.192 21:04:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.192 21:04:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.192 21:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.192 21:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.723 21:04:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.723 00:11:10.723 real 0m13.279s 00:11:10.723 user 0m20.539s 00:11:10.723 sys 0m5.196s 00:11:10.723 21:04:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:10.723 21:04:26 -- common/autotest_common.sh@10 -- # set +x 00:11:10.723 ************************************ 00:11:10.723 END TEST nvmf_nvme_cli 00:11:10.723 ************************************ 00:11:10.723 21:04:26 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:10.723 21:04:26 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:10.723 21:04:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:10.723 21:04:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.723 21:04:26 -- common/autotest_common.sh@10 -- # set +x 00:11:10.723 ************************************ 00:11:10.723 START TEST nvmf_vfio_user 00:11:10.723 ************************************ 00:11:10.723 21:04:26 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:10.723 * Looking for test storage... 00:11:10.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.723 21:04:26 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.723 21:04:26 -- nvmf/common.sh@7 -- # uname -s 00:11:10.723 21:04:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.723 21:04:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.723 21:04:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.723 21:04:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.723 21:04:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.723 21:04:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.723 21:04:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.723 21:04:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.723 21:04:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.723 21:04:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.723 21:04:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.723 21:04:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.723 21:04:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.723 21:04:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.723 21:04:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.723 21:04:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.723 21:04:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.723 21:04:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.723 21:04:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.723 21:04:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.723 21:04:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.723 21:04:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.723 21:04:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.723 21:04:26 -- paths/export.sh@5 -- # export PATH 00:11:10.724 21:04:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.724 21:04:26 -- nvmf/common.sh@47 -- # : 0 00:11:10.724 21:04:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.724 21:04:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.724 21:04:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.724 21:04:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.724 21:04:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.724 21:04:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.724 21:04:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.724 21:04:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2969314 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2969314' 00:11:10.724 Process pid: 2969314 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2969314 00:11:10.724 21:04:26 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:10.724 21:04:26 -- common/autotest_common.sh@817 -- # '[' -z 2969314 ']' 00:11:10.724 21:04:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.724 21:04:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:10.724 21:04:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.724 21:04:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:10.724 21:04:26 -- common/autotest_common.sh@10 -- # set +x 00:11:10.724 [2024-04-18 21:04:26.416064] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:11:10.724 [2024-04-18 21:04:26.416103] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.724 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.724 [2024-04-18 21:04:26.476207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.724 [2024-04-18 21:04:26.549340] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.724 [2024-04-18 21:04:26.549384] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.724 [2024-04-18 21:04:26.549391] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.724 [2024-04-18 21:04:26.549397] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.724 [2024-04-18 21:04:26.549402] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.724 [2024-04-18 21:04:26.549443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.724 [2024-04-18 21:04:26.549545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.724 [2024-04-18 21:04:26.549610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.724 [2024-04-18 21:04:26.549611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.653 21:04:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:11.653 21:04:27 -- common/autotest_common.sh@850 -- # return 0 00:11:11.653 21:04:27 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:12.584 21:04:28 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:12.584 21:04:28 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:12.584 21:04:28 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:12.584 21:04:28 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:12.584 21:04:28 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:12.584 21:04:28 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:12.841 Malloc1 00:11:12.841 21:04:28 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:13.097 21:04:28 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:13.097 21:04:28 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:13.353 21:04:29 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:13.353 21:04:29 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:13.354 21:04:29 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:13.610 Malloc2 00:11:13.610 21:04:29 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:13.610 21:04:29 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:13.866 21:04:29 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:14.125 21:04:29 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:14.125 21:04:29 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:14.125 21:04:29 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:14.125 21:04:29 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:14.125 21:04:29 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:14.125 21:04:29 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:14.125 [2024-04-18 21:04:29.914673] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:11:14.125 [2024-04-18 21:04:29.914718] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969805 ] 00:11:14.125 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.125 [2024-04-18 21:04:29.943074] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:14.125 [2024-04-18 21:04:29.952836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:14.125 [2024-04-18 21:04:29.952855] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc981867000 00:11:14.125 [2024-04-18 21:04:29.953839] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.125 [2024-04-18 21:04:29.954841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.125 [2024-04-18 21:04:29.955844] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.125 [2024-04-18 21:04:29.956848] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:14.125 [2024-04-18 21:04:29.957854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:14.125 [2024-04-18 21:04:29.958859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:11:14.125 [2024-04-18 21:04:29.959858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:14.125 [2024-04-18 21:04:29.960866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.125 [2024-04-18 21:04:29.961875] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:14.125 [2024-04-18 21:04:29.961887] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc98185c000 00:11:14.125 [2024-04-18 21:04:29.962831] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:14.125 [2024-04-18 21:04:29.975971] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:14.125 [2024-04-18 21:04:29.975995] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:14.125 [2024-04-18 21:04:29.978981] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:14.125 [2024-04-18 21:04:29.979023] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:14.125 [2024-04-18 21:04:29.979100] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:14.125 [2024-04-18 21:04:29.979118] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:14.125 [2024-04-18 21:04:29.979124] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:14.125 [2024-04-18 21:04:29.983518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:14.125 [2024-04-18 21:04:29.983527] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:14.125 [2024-04-18 21:04:29.983534] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:14.125 [2024-04-18 21:04:29.983998] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:14.125 [2024-04-18 21:04:29.984008] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:14.125 [2024-04-18 21:04:29.984015] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:14.125 [2024-04-18 21:04:29.985008] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:14.125 [2024-04-18 21:04:29.985017] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:14.125 [2024-04-18 21:04:29.986014] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:14.125 [2024-04-18 21:04:29.986021] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:14.125 [2024-04-18 21:04:29.986026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:14.125 [2024-04-18 21:04:29.986031] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:14.125 [2024-04-18 21:04:29.986137] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:14.125 [2024-04-18 21:04:29.986141] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:14.125 [2024-04-18 21:04:29.986145] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:14.125 [2024-04-18 21:04:29.987021] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:14.125 [2024-04-18 21:04:29.988026] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:14.125 [2024-04-18 21:04:29.989031] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:14.125 [2024-04-18 21:04:29.990027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:14.125 [2024-04-18 21:04:29.990102] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:14.125 [2024-04-18 21:04:29.991045] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:14.125 [2024-04-18 21:04:29.991053] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:14.125 [2024-04-18 21:04:29.991057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:14.125 [2024-04-18 21:04:29.991074] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:14.125 [2024-04-18 21:04:29.991081] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:14.125 [2024-04-18 21:04:29.991093] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:14.125 [2024-04-18 21:04:29.991098] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:14.125 [2024-04-18 21:04:29.991110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:14.125 [2024-04-18 
21:04:29.991148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:14.125 [2024-04-18 21:04:29.991160] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:14.125 [2024-04-18 21:04:29.991164] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:14.125 [2024-04-18 21:04:29.991168] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:14.125 [2024-04-18 21:04:29.991172] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:14.125 [2024-04-18 21:04:29.991176] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:14.125 [2024-04-18 21:04:29.991180] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:14.125 [2024-04-18 21:04:29.991184] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:14.125 [2024-04-18 21:04:29.991193] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:14.125 [2024-04-18 21:04:29.991204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.126 [2024-04-18 21:04:29.991232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.126 [2024-04-18 21:04:29.991240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.126 [2024-04-18 21:04:29.991247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.126 [2024-04-18 21:04:29.991252] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991282] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:14.126 [2024-04-18 21:04:29.991287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991302] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991358] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991367] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991374] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:14.126 [2024-04-18 21:04:29.991378] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:14.126 [2024-04-18 21:04:29.991383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991403] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:14.126 [2024-04-18 21:04:29.991413] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991425] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:14.126 [2024-04-18 21:04:29.991429] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:14.126 [2024-04-18 21:04:29.991435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991463] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991470] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991476] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:11:14.126 [2024-04-18 21:04:29.991480] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:14.126 [2024-04-18 21:04:29.991485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991506] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991517] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991524] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991530] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991534] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991539] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:14.126 [2024-04-18 21:04:29.991543] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:14.126 [2024-04-18 21:04:29.991549] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:14.126 [2024-04-18 21:04:29.991565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991650] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:14.126 [2024-04-18 21:04:29.991654] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:14.126 [2024-04-18 21:04:29.991657] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:14.126 [2024-04-18 21:04:29.991660] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:14.126 [2024-04-18 21:04:29.991666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:14.126 [2024-04-18 21:04:29.991673] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:14.126 [2024-04-18 21:04:29.991676] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:14.126 [2024-04-18 21:04:29.991682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991688] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:14.126 [2024-04-18 21:04:29.991692] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:14.126 [2024-04-18 21:04:29.991697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991704] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:14.126 [2024-04-18 21:04:29.991708] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:14.126 [2024-04-18 21:04:29.991713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:14.126 [2024-04-18 21:04:29.991720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:14.126 [2024-04-18 21:04:29.991747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:14.126 ===================================================== 00:11:14.126 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:14.126 ===================================================== 00:11:14.126 Controller Capabilities/Features 00:11:14.126 ================================ 00:11:14.126 Vendor ID: 4e58 00:11:14.126 Subsystem Vendor ID: 4e58 00:11:14.126 Serial Number: SPDK1 00:11:14.126 Model Number: SPDK bdev Controller 00:11:14.126 Firmware Version: 24.05 00:11:14.126 Recommended Arb Burst: 6 00:11:14.126 IEEE OUI Identifier: 8d 6b 50 00:11:14.126 Multi-path I/O 00:11:14.126 May have multiple subsystem ports: Yes 00:11:14.126 May have multiple controllers: Yes 00:11:14.126 Associated with SR-IOV VF: No 00:11:14.126 Max Data Transfer Size: 131072 00:11:14.126 Max Number of Namespaces: 32 00:11:14.126 Max Number of I/O Queues: 127 00:11:14.126 NVMe 
Specification Version (VS): 1.3 00:11:14.126 NVMe Specification Version (Identify): 1.3 00:11:14.126 Maximum Queue Entries: 256 00:11:14.126 Contiguous Queues Required: Yes 00:11:14.126 Arbitration Mechanisms Supported 00:11:14.126 Weighted Round Robin: Not Supported 00:11:14.126 Vendor Specific: Not Supported 00:11:14.126 Reset Timeout: 15000 ms 00:11:14.127 Doorbell Stride: 4 bytes 00:11:14.127 NVM Subsystem Reset: Not Supported 00:11:14.127 Command Sets Supported 00:11:14.127 NVM Command Set: Supported 00:11:14.127 Boot Partition: Not Supported 00:11:14.127 Memory Page Size Minimum: 4096 bytes 00:11:14.127 Memory Page Size Maximum: 4096 bytes 00:11:14.127 Persistent Memory Region: Not Supported 00:11:14.127 Optional Asynchronous Events Supported 00:11:14.127 Namespace Attribute Notices: Supported 00:11:14.127 Firmware Activation Notices: Not Supported 00:11:14.127 ANA Change Notices: Not Supported 00:11:14.127 PLE Aggregate Log Change Notices: Not Supported 00:11:14.127 LBA Status Info Alert Notices: Not Supported 00:11:14.127 EGE Aggregate Log Change Notices: Not Supported 00:11:14.127 Normal NVM Subsystem Shutdown event: Not Supported 00:11:14.127 Zone Descriptor Change Notices: Not Supported 00:11:14.127 Discovery Log Change Notices: Not Supported 00:11:14.127 Controller Attributes 00:11:14.127 128-bit Host Identifier: Supported 00:11:14.127 Non-Operational Permissive Mode: Not Supported 00:11:14.127 NVM Sets: Not Supported 00:11:14.127 Read Recovery Levels: Not Supported 00:11:14.127 Endurance Groups: Not Supported 00:11:14.127 Predictable Latency Mode: Not Supported 00:11:14.127 Traffic Based Keep ALive: Not Supported 00:11:14.127 Namespace Granularity: Not Supported 00:11:14.127 SQ Associations: Not Supported 00:11:14.127 UUID List: Not Supported 00:11:14.127 Multi-Domain Subsystem: Not Supported 00:11:14.127 Fixed Capacity Management: Not Supported 00:11:14.127 Variable Capacity Management: Not Supported 00:11:14.127 Delete Endurance Group: Not Supported 00:11:14.127 Delete NVM Set: Not Supported 00:11:14.127 Extended LBA Formats Supported: Not Supported 00:11:14.127 Flexible Data Placement Supported: Not Supported 00:11:14.127 00:11:14.127 Controller Memory Buffer Support 00:11:14.127 ================================ 00:11:14.127 Supported: No 00:11:14.127 00:11:14.127 Persistent Memory Region Support 00:11:14.127 ================================ 00:11:14.127 Supported: No 00:11:14.127 00:11:14.127 Admin Command Set Attributes 00:11:14.127 ============================ 00:11:14.127 Security Send/Receive: Not Supported 00:11:14.127 Format NVM: Not Supported 00:11:14.127 Firmware Activate/Download: Not Supported 00:11:14.127 Namespace Management: Not Supported 00:11:14.127 Device Self-Test: Not Supported 00:11:14.127 Directives: Not Supported 00:11:14.127 NVMe-MI: Not Supported 00:11:14.127 Virtualization Management: Not Supported 00:11:14.127 Doorbell Buffer Config: Not Supported 00:11:14.127 Get LBA Status Capability: Not Supported 00:11:14.127 Command & Feature Lockdown Capability: Not Supported 00:11:14.127 Abort Command Limit: 4 00:11:14.127 Async Event Request Limit: 4 00:11:14.127 Number of Firmware Slots: N/A 00:11:14.127 Firmware Slot 1 Read-Only: N/A 00:11:14.127 Firmware Activation Without Reset: N/A 00:11:14.127 Multiple Update Detection Support: N/A 00:11:14.127 Firmware Update Granularity: No Information Provided 00:11:14.127 Per-Namespace SMART Log: No 00:11:14.127 Asymmetric Namespace Access Log Page: Not Supported 00:11:14.127 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:11:14.127 Command Effects Log Page: Supported 00:11:14.127 Get Log Page Extended Data: Supported 00:11:14.127 Telemetry Log Pages: Not Supported 00:11:14.127 Persistent Event Log Pages: Not Supported 00:11:14.127 Supported Log Pages Log Page: May Support 00:11:14.127 Commands Supported & Effects Log Page: Not Supported 00:11:14.127 Feature Identifiers & Effects Log Page:May Support 00:11:14.127 NVMe-MI Commands & Effects Log Page: May Support 00:11:14.127 Data Area 4 for Telemetry Log: Not Supported 00:11:14.127 Error Log Page Entries Supported: 128 00:11:14.127 Keep Alive: Supported 00:11:14.127 Keep Alive Granularity: 10000 ms 00:11:14.127 00:11:14.127 NVM Command Set Attributes 00:11:14.127 ========================== 00:11:14.127 Submission Queue Entry Size 00:11:14.127 Max: 64 00:11:14.127 Min: 64 00:11:14.127 Completion Queue Entry Size 00:11:14.127 Max: 16 00:11:14.127 Min: 16 00:11:14.127 Number of Namespaces: 32 00:11:14.127 Compare Command: Supported 00:11:14.127 Write Uncorrectable Command: Not Supported 00:11:14.127 Dataset Management Command: Supported 00:11:14.127 Write Zeroes Command: Supported 00:11:14.127 Set Features Save Field: Not Supported 00:11:14.127 Reservations: Not Supported 00:11:14.127 Timestamp: Not Supported 00:11:14.127 Copy: Supported 00:11:14.127 Volatile Write Cache: Present 00:11:14.127 Atomic Write Unit (Normal): 1 00:11:14.127 Atomic Write Unit (PFail): 1 00:11:14.127 Atomic Compare & Write Unit: 1 00:11:14.127 Fused Compare & Write: Supported 00:11:14.127 Scatter-Gather List 00:11:14.127 SGL Command Set: Supported (Dword aligned) 00:11:14.127 SGL Keyed: Not Supported 00:11:14.127 SGL Bit Bucket Descriptor: Not Supported 00:11:14.127 SGL Metadata Pointer: Not Supported 00:11:14.127 Oversized SGL: Not Supported 00:11:14.127 SGL Metadata Address: Not Supported 00:11:14.127 SGL Offset: Not Supported 00:11:14.127 Transport SGL Data Block: Not Supported 00:11:14.127 Replay Protected Memory Block: Not Supported 00:11:14.127 00:11:14.127 Firmware Slot Information 00:11:14.127 ========================= 00:11:14.127 Active slot: 1 00:11:14.127 Slot 1 Firmware Revision: 24.05 00:11:14.127 00:11:14.127 00:11:14.127 Commands Supported and Effects 00:11:14.127 ============================== 00:11:14.127 Admin Commands 00:11:14.127 -------------- 00:11:14.127 Get Log Page (02h): Supported 00:11:14.127 Identify (06h): Supported 00:11:14.127 Abort (08h): Supported 00:11:14.127 Set Features (09h): Supported 00:11:14.127 Get Features (0Ah): Supported 00:11:14.127 Asynchronous Event Request (0Ch): Supported 00:11:14.127 Keep Alive (18h): Supported 00:11:14.127 I/O Commands 00:11:14.127 ------------ 00:11:14.127 Flush (00h): Supported LBA-Change 00:11:14.127 Write (01h): Supported LBA-Change 00:11:14.127 Read (02h): Supported 00:11:14.127 Compare (05h): Supported 00:11:14.127 Write Zeroes (08h): Supported LBA-Change 00:11:14.127 Dataset Management (09h): Supported LBA-Change 00:11:14.127 Copy (19h): Supported LBA-Change 00:11:14.127 Unknown (79h): Supported LBA-Change 00:11:14.127 Unknown (7Ah): Supported 00:11:14.127 00:11:14.127 Error Log 00:11:14.127 ========= 00:11:14.127 00:11:14.127 Arbitration 00:11:14.127 =========== 00:11:14.127 Arbitration Burst: 1 00:11:14.127 00:11:14.127 Power Management 00:11:14.127 ================ 00:11:14.127 Number of Power States: 1 00:11:14.127 Current Power State: Power State #0 00:11:14.127 Power State #0: 00:11:14.127 Max Power: 0.00 W 00:11:14.127 Non-Operational State: Operational 00:11:14.127 Entry 
Latency: Not Reported 00:11:14.127 Exit Latency: Not Reported 00:11:14.127 Relative Read Throughput: 0 00:11:14.127 Relative Read Latency: 0 00:11:14.127 Relative Write Throughput: 0 00:11:14.127 Relative Write Latency: 0 00:11:14.127 Idle Power: Not Reported 00:11:14.127 Active Power: Not Reported 00:11:14.127 Non-Operational Permissive Mode: Not Supported 00:11:14.128 00:11:14.128 Health Information 00:11:14.128 ================== 00:11:14.128 Critical Warnings: 00:11:14.128 Available Spare Space: OK 00:11:14.128 Temperature: OK 00:11:14.128 Device Reliability: OK 00:11:14.128 Read Only: No 00:11:14.128 Volatile Memory Backup: OK 00:11:14.128 Current Temperature: 0 Kelvin (-2[2024-04-18 21:04:29.991841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:14.128 [2024-04-18 21:04:29.991849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:14.128 [2024-04-18 21:04:29.991874] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:14.128 [2024-04-18 21:04:29.991882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.128 [2024-04-18 21:04:29.991888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.128 [2024-04-18 21:04:29.991893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.128 [2024-04-18 21:04:29.991899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.128 [2024-04-18 21:04:29.992051] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:14.128 [2024-04-18 21:04:29.992060] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:14.128 [2024-04-18 21:04:29.993056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:14.128 [2024-04-18 21:04:29.993107] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:14.128 [2024-04-18 21:04:29.993113] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:14.128 [2024-04-18 21:04:29.994063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:14.128 [2024-04-18 21:04:29.994072] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:14.128 [2024-04-18 21:04:29.994119] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:14.128 [2024-04-18 21:04:29.996089] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:14.128 73 Celsius) 00:11:14.128 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:14.128 Available Spare: 0% 00:11:14.128 Available Spare Threshold: 0% 00:11:14.128 Life Percentage Used: 0% 
00:11:14.128 Data Units Read: 0 00:11:14.128 Data Units Written: 0 00:11:14.128 Host Read Commands: 0 00:11:14.128 Host Write Commands: 0 00:11:14.128 Controller Busy Time: 0 minutes 00:11:14.128 Power Cycles: 0 00:11:14.128 Power On Hours: 0 hours 00:11:14.128 Unsafe Shutdowns: 0 00:11:14.128 Unrecoverable Media Errors: 0 00:11:14.128 Lifetime Error Log Entries: 0 00:11:14.128 Warning Temperature Time: 0 minutes 00:11:14.128 Critical Temperature Time: 0 minutes 00:11:14.128 00:11:14.128 Number of Queues 00:11:14.128 ================ 00:11:14.128 Number of I/O Submission Queues: 127 00:11:14.128 Number of I/O Completion Queues: 127 00:11:14.128 00:11:14.128 Active Namespaces 00:11:14.128 ================= 00:11:14.128 Namespace ID:1 00:11:14.128 Error Recovery Timeout: Unlimited 00:11:14.128 Command Set Identifier: NVM (00h) 00:11:14.128 Deallocate: Supported 00:11:14.128 Deallocated/Unwritten Error: Not Supported 00:11:14.128 Deallocated Read Value: Unknown 00:11:14.128 Deallocate in Write Zeroes: Not Supported 00:11:14.128 Deallocated Guard Field: 0xFFFF 00:11:14.128 Flush: Supported 00:11:14.128 Reservation: Supported 00:11:14.128 Namespace Sharing Capabilities: Multiple Controllers 00:11:14.128 Size (in LBAs): 131072 (0GiB) 00:11:14.128 Capacity (in LBAs): 131072 (0GiB) 00:11:14.128 Utilization (in LBAs): 131072 (0GiB) 00:11:14.128 NGUID: 1AD79A728BD446BFAD1511E773A7DB35 00:11:14.128 UUID: 1ad79a72-8bd4-46bf-ad15-11e773a7db35 00:11:14.128 Thin Provisioning: Not Supported 00:11:14.128 Per-NS Atomic Units: Yes 00:11:14.128 Atomic Boundary Size (Normal): 0 00:11:14.128 Atomic Boundary Size (PFail): 0 00:11:14.128 Atomic Boundary Offset: 0 00:11:14.128 Maximum Single Source Range Length: 65535 00:11:14.128 Maximum Copy Length: 65535 00:11:14.128 Maximum Source Range Count: 1 00:11:14.128 NGUID/EUI64 Never Reused: No 00:11:14.128 Namespace Write Protected: No 00:11:14.128 Number of LBA Formats: 1 00:11:14.128 Current LBA Format: LBA Format #00 00:11:14.128 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:14.128 00:11:14.128 21:04:30 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:14.386 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.386 [2024-04-18 21:04:30.208385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:19.651 [2024-04-18 21:04:35.229100] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:19.651 Initializing NVMe Controllers 00:11:19.651 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:19.651 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:19.651 Initialization complete. Launching workers. 
00:11:19.651 ======================================================== 00:11:19.651 Latency(us) 00:11:19.651 Device Information : IOPS MiB/s Average min max 00:11:19.651 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39915.56 155.92 3206.36 984.33 6634.35 00:11:19.651 ======================================================== 00:11:19.651 Total : 39915.56 155.92 3206.36 984.33 6634.35 00:11:19.651 00:11:19.651 21:04:35 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:19.651 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.651 [2024-04-18 21:04:35.441057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:24.907 [2024-04-18 21:04:40.477722] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:24.907 Initializing NVMe Controllers 00:11:24.907 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:24.907 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:24.907 Initialization complete. Launching workers. 00:11:24.907 ======================================================== 00:11:24.907 Latency(us) 00:11:24.907 Device Information : IOPS MiB/s Average min max 00:11:24.907 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.71 5982.71 10977.71 00:11:24.907 ======================================================== 00:11:24.907 Total : 16051.20 62.70 7982.71 5982.71 10977.71 00:11:24.907 00:11:24.907 21:04:40 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:24.907 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.907 [2024-04-18 21:04:40.673696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:30.165 [2024-04-18 21:04:45.764913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:30.165 Initializing NVMe Controllers 00:11:30.165 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:30.165 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:30.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:30.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:30.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:30.165 Initialization complete. Launching workers. 
00:11:30.166 Starting thread on core 2 00:11:30.166 Starting thread on core 3 00:11:30.166 Starting thread on core 1 00:11:30.166 21:04:45 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:30.166 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.166 [2024-04-18 21:04:46.052941] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:33.446 [2024-04-18 21:04:49.114836] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:33.446 Initializing NVMe Controllers 00:11:33.446 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:33.446 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:33.446 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:33.446 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:33.446 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:33.446 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:33.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:33.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:33.446 Initialization complete. Launching workers. 00:11:33.446 Starting thread on core 1 with urgent priority queue 00:11:33.446 Starting thread on core 2 with urgent priority queue 00:11:33.446 Starting thread on core 3 with urgent priority queue 00:11:33.446 Starting thread on core 0 with urgent priority queue 00:11:33.446 SPDK bdev Controller (SPDK1 ) core 0: 6637.33 IO/s 15.07 secs/100000 ios 00:11:33.446 SPDK bdev Controller (SPDK1 ) core 1: 4996.67 IO/s 20.01 secs/100000 ios 00:11:33.446 SPDK bdev Controller (SPDK1 ) core 2: 4597.67 IO/s 21.75 secs/100000 ios 00:11:33.446 SPDK bdev Controller (SPDK1 ) core 3: 6330.67 IO/s 15.80 secs/100000 ios 00:11:33.446 ======================================================== 00:11:33.447 00:11:33.447 21:04:49 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:33.447 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.704 [2024-04-18 21:04:49.389916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:33.704 [2024-04-18 21:04:49.423124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:33.704 Initializing NVMe Controllers 00:11:33.704 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:33.704 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:33.704 Namespace ID: 1 size: 0GB 00:11:33.704 Initialization complete. 00:11:33.704 INFO: using host memory buffer for IO 00:11:33.704 Hello world! 
00:11:33.704 21:04:49 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:33.704 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.961 [2024-04-18 21:04:49.696910] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:34.893 Initializing NVMe Controllers 00:11:34.893 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:34.893 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:34.893 Initialization complete. Launching workers. 00:11:34.893 submit (in ns) avg, min, max = 5874.6, 3240.9, 4000370.4 00:11:34.893 complete (in ns) avg, min, max = 20192.5, 1795.7, 4001885.2 00:11:34.893 00:11:34.893 Submit histogram 00:11:34.893 ================ 00:11:34.893 Range in us Cumulative Count 00:11:34.893 3.228 - 3.242: 0.0060% ( 1) 00:11:34.893 3.242 - 3.256: 0.0544% ( 8) 00:11:34.893 3.256 - 3.270: 0.7435% ( 114) 00:11:34.893 3.270 - 3.283: 2.4784% ( 287) 00:11:34.893 3.283 - 3.297: 4.2798% ( 298) 00:11:34.893 3.297 - 3.311: 6.9516% ( 442) 00:11:34.893 3.311 - 3.325: 10.7357% ( 626) 00:11:34.893 3.325 - 3.339: 15.5292% ( 793) 00:11:34.893 3.339 - 3.353: 20.7036% ( 856) 00:11:34.893 3.353 - 3.367: 26.3012% ( 926) 00:11:34.893 3.367 - 3.381: 31.6267% ( 881) 00:11:34.893 3.381 - 3.395: 36.7769% ( 852) 00:11:34.893 3.395 - 3.409: 41.6672% ( 809) 00:11:34.893 3.409 - 3.423: 47.3191% ( 935) 00:11:34.894 3.423 - 3.437: 52.2698% ( 819) 00:11:34.894 3.437 - 3.450: 56.3320% ( 672) 00:11:34.894 3.450 - 3.464: 61.0832% ( 786) 00:11:34.894 3.464 - 3.478: 66.9588% ( 972) 00:11:34.894 3.478 - 3.492: 71.9640% ( 828) 00:11:34.894 3.492 - 3.506: 75.9354% ( 657) 00:11:34.894 3.506 - 3.520: 79.8706% ( 651) 00:11:34.894 3.520 - 3.534: 82.7238% ( 472) 00:11:34.894 3.534 - 3.548: 84.7609% ( 337) 00:11:34.894 3.548 - 3.562: 86.0606% ( 215) 00:11:34.894 3.562 - 3.590: 87.8801% ( 301) 00:11:34.894 3.590 - 3.617: 89.1797% ( 215) 00:11:34.894 3.617 - 3.645: 90.7574% ( 261) 00:11:34.894 3.645 - 3.673: 92.4802% ( 285) 00:11:34.894 3.673 - 3.701: 94.1365% ( 274) 00:11:34.894 3.701 - 3.729: 95.7867% ( 273) 00:11:34.894 3.729 - 3.757: 97.1045% ( 218) 00:11:34.894 3.757 - 3.784: 98.0838% ( 162) 00:11:34.894 3.784 - 3.812: 98.7185% ( 105) 00:11:34.894 3.812 - 3.840: 99.1900% ( 78) 00:11:34.894 3.840 - 3.868: 99.4620% ( 45) 00:11:34.894 3.868 - 3.896: 99.5527% ( 15) 00:11:34.894 3.896 - 3.923: 99.5829% ( 5) 00:11:34.894 3.923 - 3.951: 99.6010% ( 3) 00:11:34.894 3.979 - 4.007: 99.6071% ( 1) 00:11:34.894 4.035 - 4.063: 99.6131% ( 1) 00:11:34.894 4.063 - 4.090: 99.6192% ( 1) 00:11:34.894 4.146 - 4.174: 99.6252% ( 1) 00:11:34.894 4.563 - 4.591: 99.6313% ( 1) 00:11:34.894 4.953 - 4.981: 99.6373% ( 1) 00:11:34.894 5.315 - 5.343: 99.6434% ( 1) 00:11:34.894 5.398 - 5.426: 99.6494% ( 1) 00:11:34.894 5.482 - 5.510: 99.6675% ( 3) 00:11:34.894 5.510 - 5.537: 99.6736% ( 1) 00:11:34.894 5.593 - 5.621: 99.6857% ( 2) 00:11:34.894 5.621 - 5.649: 99.6917% ( 1) 00:11:34.894 5.677 - 5.704: 99.6978% ( 1) 00:11:34.894 5.732 - 5.760: 99.7098% ( 2) 00:11:34.894 5.760 - 5.788: 99.7159% ( 1) 00:11:34.894 5.843 - 5.871: 99.7219% ( 1) 00:11:34.894 5.955 - 5.983: 99.7280% ( 1) 00:11:34.894 6.038 - 6.066: 99.7340% ( 1) 00:11:34.894 6.094 - 6.122: 99.7461% ( 2) 00:11:34.894 6.205 - 6.233: 99.7522% ( 1) 00:11:34.894 6.233 - 6.261: 99.7582% ( 1) 00:11:34.894 6.261 - 6.289: 99.7643% ( 1) 00:11:34.894 6.317 
- 6.344: 99.7703% ( 1) 00:11:34.894 6.372 - 6.400: 99.7763% ( 1) 00:11:34.894 6.400 - 6.428: 99.7824% ( 1) 00:11:34.894 6.511 - 6.539: 99.7884% ( 1) 00:11:34.894 6.678 - 6.706: 99.7945% ( 1) 00:11:34.894 6.706 - 6.734: 99.8066% ( 2) 00:11:34.894 6.762 - 6.790: 99.8126% ( 1) 00:11:34.894 6.790 - 6.817: 99.8187% ( 1) 00:11:34.894 6.984 - 7.012: 99.8247% ( 1) 00:11:34.894 7.012 - 7.040: 99.8307% ( 1) 00:11:34.894 7.040 - 7.068: 99.8368% ( 1) 00:11:34.894 7.096 - 7.123: 99.8428% ( 1) 00:11:34.894 7.179 - 7.235: 99.8489% ( 1) 00:11:34.894 7.290 - 7.346: 99.8549% ( 1) 00:11:34.894 7.402 - 7.457: 99.8610% ( 1) 00:11:34.894 7.457 - 7.513: 99.8670% ( 1) 00:11:34.894 7.569 - 7.624: 99.8731% ( 1) 00:11:34.894 7.680 - 7.736: 99.8791% ( 1) 00:11:34.894 7.736 - 7.791: 99.8851% ( 1) 00:11:34.894 [2024-04-18 21:04:50.718963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:34.894 7.791 - 7.847: 99.8912% ( 1) 00:11:34.894 8.014 - 8.070: 99.8972% ( 1) 00:11:34.894 8.403 - 8.459: 99.9033% ( 1) 00:11:34.894 8.570 - 8.626: 99.9093% ( 1) 00:11:34.894 8.682 - 8.737: 99.9154% ( 1) 00:11:34.894 9.016 - 9.071: 99.9214% ( 1) 00:11:34.894 9.071 - 9.127: 99.9275% ( 1) 00:11:34.894 9.350 - 9.405: 99.9335% ( 1) 00:11:34.894 9.795 - 9.850: 99.9396% ( 1) 00:11:34.894 3989.148 - 4017.642: 100.0000% ( 10) 00:11:34.894 00:11:34.894 Complete histogram 00:11:34.894 ================== 00:11:34.894 Range in us Cumulative Count 00:11:34.894 1.795 - 1.809: 5.2711% ( 872) 00:11:34.894 1.809 - 1.823: 52.2638% ( 7774) 00:11:34.894 1.823 - 1.837: 79.4294% ( 4494) 00:11:34.894 1.837 - 1.850: 83.4069% ( 658) 00:11:34.894 1.850 - 1.864: 85.9639% ( 423) 00:11:34.894 1.864 - 1.878: 91.9422% ( 989) 00:11:34.894 1.878 - 1.892: 95.5268% ( 593) 00:11:34.894 1.892 - 1.906: 97.7513% ( 368) 00:11:34.894 1.906 - 1.920: 98.4223% ( 111) 00:11:34.894 1.920 - 1.934: 98.6278% ( 34) 00:11:34.894 1.934 - 1.948: 98.7608% ( 22) 00:11:34.894 1.948 - 1.962: 98.9482% ( 31) 00:11:34.894 1.962 - 1.976: 99.0328% ( 14) 00:11:34.894 1.976 - 1.990: 99.0630% ( 5) 00:11:34.894 1.990 - 2.003: 99.1900% ( 21) 00:11:34.894 2.003 - 2.017: 99.2807% ( 15) 00:11:34.894 2.017 - 2.031: 99.3048% ( 4) 00:11:34.894 2.045 - 2.059: 99.3109% ( 1) 00:11:34.894 2.059 - 2.073: 99.3230% ( 2) 00:11:34.894 2.073 - 2.087: 99.3290% ( 1) 00:11:34.894 2.157 - 2.170: 99.3351% ( 1) 00:11:34.894 2.184 - 2.198: 99.3411% ( 1) 00:11:34.894 2.226 - 2.240: 99.3472% ( 1) 00:11:34.894 2.463 - 2.477: 99.3532% ( 1) 00:11:34.894 2.490 - 2.504: 99.3592% ( 1) 00:11:34.894 3.868 - 3.896: 99.3653% ( 1) 00:11:34.894 3.896 - 3.923: 99.3713% ( 1) 00:11:34.894 4.035 - 4.063: 99.3774% ( 1) 00:11:34.894 4.090 - 4.118: 99.3834% ( 1) 00:11:34.894 4.174 - 4.202: 99.3895% ( 1) 00:11:34.894 4.814 - 4.842: 99.3955% ( 1) 00:11:34.894 4.897 - 4.925: 99.4016% ( 1) 00:11:34.894 4.925 - 4.953: 99.4076% ( 1) 00:11:34.894 4.981 - 5.009: 99.4136% ( 1) 00:11:34.894 5.092 - 5.120: 99.4197% ( 1) 00:11:34.894 5.370 - 5.398: 99.4257% ( 1) 00:11:34.894 5.454 - 5.482: 99.4318% ( 1) 00:11:34.894 5.565 - 5.593: 99.4378% ( 1) 00:11:34.894 5.704 - 5.732: 99.4499% ( 2) 00:11:34.894 5.732 - 5.760: 99.4560% ( 1) 00:11:34.894 5.816 - 5.843: 99.4620% ( 1) 00:11:34.894 6.010 - 6.038: 99.4681% ( 1) 00:11:34.894 6.066 - 6.094: 99.4741% ( 1) 00:11:34.894 6.122 - 6.150: 99.4801% ( 1) 00:11:34.894 6.595 - 6.623: 99.4862% ( 1) 00:11:34.894 6.762 - 6.790: 99.4922% ( 1) 00:11:34.894 6.845 - 6.873: 99.4983% ( 1) 00:11:34.894 7.179 - 7.235: 99.5043% ( 1) 00:11:34.894 7.624 - 7.680: 99.5104% ( 1) 
00:11:34.894 8.403 - 8.459: 99.5164% ( 1) 00:11:34.894 9.071 - 9.127: 99.5225% ( 1) 00:11:34.894 10.741 - 10.797: 99.5285% ( 1) 00:11:34.894 12.410 - 12.466: 99.5345% ( 1) 00:11:34.894 14.470 - 14.581: 99.5406% ( 1) 00:11:34.894 3989.148 - 4017.642: 100.0000% ( 76) 00:11:34.894 00:11:34.894 21:04:50 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:34.894 21:04:50 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:34.894 21:04:50 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:34.894 21:04:50 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:34.894 21:04:50 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:35.152 [2024-04-18 21:04:50.918336] nvmf_rpc.c: 279:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:11:35.152 [ 00:11:35.152 { 00:11:35.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:35.152 "subtype": "Discovery", 00:11:35.152 "listen_addresses": [], 00:11:35.152 "allow_any_host": true, 00:11:35.152 "hosts": [] 00:11:35.152 }, 00:11:35.152 { 00:11:35.152 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:35.152 "subtype": "NVMe", 00:11:35.152 "listen_addresses": [ 00:11:35.152 { 00:11:35.152 "transport": "VFIOUSER", 00:11:35.152 "trtype": "VFIOUSER", 00:11:35.152 "adrfam": "IPv4", 00:11:35.152 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:35.152 "trsvcid": "0" 00:11:35.152 } 00:11:35.152 ], 00:11:35.152 "allow_any_host": true, 00:11:35.152 "hosts": [], 00:11:35.152 "serial_number": "SPDK1", 00:11:35.152 "model_number": "SPDK bdev Controller", 00:11:35.152 "max_namespaces": 32, 00:11:35.152 "min_cntlid": 1, 00:11:35.152 "max_cntlid": 65519, 00:11:35.152 "namespaces": [ 00:11:35.152 { 00:11:35.152 "nsid": 1, 00:11:35.152 "bdev_name": "Malloc1", 00:11:35.152 "name": "Malloc1", 00:11:35.152 "nguid": "1AD79A728BD446BFAD1511E773A7DB35", 00:11:35.152 "uuid": "1ad79a72-8bd4-46bf-ad15-11e773a7db35" 00:11:35.152 } 00:11:35.152 ] 00:11:35.152 }, 00:11:35.152 { 00:11:35.152 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:35.152 "subtype": "NVMe", 00:11:35.152 "listen_addresses": [ 00:11:35.152 { 00:11:35.152 "transport": "VFIOUSER", 00:11:35.152 "trtype": "VFIOUSER", 00:11:35.152 "adrfam": "IPv4", 00:11:35.152 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:35.152 "trsvcid": "0" 00:11:35.152 } 00:11:35.152 ], 00:11:35.152 "allow_any_host": true, 00:11:35.152 "hosts": [], 00:11:35.152 "serial_number": "SPDK2", 00:11:35.152 "model_number": "SPDK bdev Controller", 00:11:35.152 "max_namespaces": 32, 00:11:35.152 "min_cntlid": 1, 00:11:35.152 "max_cntlid": 65519, 00:11:35.152 "namespaces": [ 00:11:35.152 { 00:11:35.152 "nsid": 1, 00:11:35.152 "bdev_name": "Malloc2", 00:11:35.152 "name": "Malloc2", 00:11:35.152 "nguid": "B73B09B009A048428081A94CB357B932", 00:11:35.152 "uuid": "b73b09b0-09a0-4842-8081-a94cb357b932" 00:11:35.152 } 00:11:35.152 ] 00:11:35.152 } 00:11:35.152 ] 00:11:35.152 21:04:50 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:35.152 21:04:50 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2973317 00:11:35.152 21:04:50 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:35.152 21:04:50 -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:35.152 21:04:50 -- common/autotest_common.sh@1251 -- # local i=0 00:11:35.152 21:04:50 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:35.152 21:04:50 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:35.152 21:04:50 -- common/autotest_common.sh@1262 -- # return 0 00:11:35.152 21:04:50 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:35.152 21:04:50 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:35.152 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.409 [2024-04-18 21:04:51.106949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:35.409 Malloc3 00:11:35.409 21:04:51 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:35.409 [2024-04-18 21:04:51.326533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:35.666 21:04:51 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:35.666 Asynchronous Event Request test 00:11:35.666 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:35.666 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:35.666 Registering asynchronous event callbacks... 00:11:35.666 Starting namespace attribute notice tests for all controllers... 00:11:35.666 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:35.666 aer_cb - Changed Namespace 00:11:35.666 Cleaning up... 
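The namespace hot-add that drives this AER check can be reproduced outside the harness with the same rpc.py and aer calls traced above. A minimal sketch follows (paths as laid out in this workspace; the default /var/tmp/spdk.sock RPC socket is assumed, and a simple polling loop stands in for the autotest waitforfile helper):

    #!/usr/bin/env bash
    # Minimal sketch of the hot-add + AER flow above. Assumes nvmf_tgt is already
    # serving the vfio-user1 subsystem and listening on the default RPC socket.
    set -e
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py
    traddr=/var/run/vfio-user/domain/vfio-user1/1
    subnqn=nqn.2019-07.io.spdk:cnode1
    touch_file=/tmp/aer_touch_file

    # Start the AER listener in the background; it creates $touch_file once its
    # admin queue is up and then waits for a namespace-attribute-changed notice.
    $spdk/test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
        -n 2 -g -t "$touch_file" &
    aerpid=$!

    while [ ! -e "$touch_file" ]; do sleep 0.1; done   # stand-in for waitforfile
    rm -f "$touch_file"

    # Hot-add a second namespace; this is what fires the AEN seen in the log.
    $rpc bdev_malloc_create 64 512 --name Malloc3
    $rpc nvmf_subsystem_add_ns "$subnqn" Malloc3 -n 2
    $rpc nvmf_get_subsystems                           # nsid 2 should now appear
    wait $aerpid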
00:11:35.666 [ 00:11:35.666 { 00:11:35.666 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:35.666 "subtype": "Discovery", 00:11:35.666 "listen_addresses": [], 00:11:35.666 "allow_any_host": true, 00:11:35.666 "hosts": [] 00:11:35.666 }, 00:11:35.666 { 00:11:35.666 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:35.666 "subtype": "NVMe", 00:11:35.666 "listen_addresses": [ 00:11:35.666 { 00:11:35.666 "transport": "VFIOUSER", 00:11:35.666 "trtype": "VFIOUSER", 00:11:35.666 "adrfam": "IPv4", 00:11:35.666 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:35.666 "trsvcid": "0" 00:11:35.666 } 00:11:35.666 ], 00:11:35.666 "allow_any_host": true, 00:11:35.666 "hosts": [], 00:11:35.666 "serial_number": "SPDK1", 00:11:35.666 "model_number": "SPDK bdev Controller", 00:11:35.666 "max_namespaces": 32, 00:11:35.666 "min_cntlid": 1, 00:11:35.666 "max_cntlid": 65519, 00:11:35.666 "namespaces": [ 00:11:35.666 { 00:11:35.666 "nsid": 1, 00:11:35.666 "bdev_name": "Malloc1", 00:11:35.666 "name": "Malloc1", 00:11:35.666 "nguid": "1AD79A728BD446BFAD1511E773A7DB35", 00:11:35.666 "uuid": "1ad79a72-8bd4-46bf-ad15-11e773a7db35" 00:11:35.666 }, 00:11:35.666 { 00:11:35.666 "nsid": 2, 00:11:35.666 "bdev_name": "Malloc3", 00:11:35.666 "name": "Malloc3", 00:11:35.666 "nguid": "D51A10A6C7154A58B2E3BAC79B9F4FD1", 00:11:35.666 "uuid": "d51a10a6-c715-4a58-b2e3-bac79b9f4fd1" 00:11:35.666 } 00:11:35.666 ] 00:11:35.666 }, 00:11:35.666 { 00:11:35.666 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:35.666 "subtype": "NVMe", 00:11:35.666 "listen_addresses": [ 00:11:35.666 { 00:11:35.666 "transport": "VFIOUSER", 00:11:35.666 "trtype": "VFIOUSER", 00:11:35.666 "adrfam": "IPv4", 00:11:35.666 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:35.666 "trsvcid": "0" 00:11:35.666 } 00:11:35.666 ], 00:11:35.666 "allow_any_host": true, 00:11:35.666 "hosts": [], 00:11:35.666 "serial_number": "SPDK2", 00:11:35.666 "model_number": "SPDK bdev Controller", 00:11:35.666 "max_namespaces": 32, 00:11:35.666 "min_cntlid": 1, 00:11:35.666 "max_cntlid": 65519, 00:11:35.666 "namespaces": [ 00:11:35.666 { 00:11:35.666 "nsid": 1, 00:11:35.666 "bdev_name": "Malloc2", 00:11:35.666 "name": "Malloc2", 00:11:35.666 "nguid": "B73B09B009A048428081A94CB357B932", 00:11:35.666 "uuid": "b73b09b0-09a0-4842-8081-a94cb357b932" 00:11:35.666 } 00:11:35.666 ] 00:11:35.666 } 00:11:35.666 ] 00:11:35.666 21:04:51 -- target/nvmf_vfio_user.sh@44 -- # wait 2973317 00:11:35.666 21:04:51 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:35.666 21:04:51 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:35.666 21:04:51 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:35.666 21:04:51 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:35.666 [2024-04-18 21:04:51.551991] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:11:35.666 [2024-04-18 21:04:51.552027] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2973487 ] 00:11:35.666 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.666 [2024-04-18 21:04:51.582919] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:35.666 [2024-04-18 21:04:51.592765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:35.666 [2024-04-18 21:04:51.592786] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd5ba0df000 00:11:35.666 [2024-04-18 21:04:51.593762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.666 [2024-04-18 21:04:51.594771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.666 [2024-04-18 21:04:51.595778] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.666 [2024-04-18 21:04:51.596787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.925 [2024-04-18 21:04:51.597795] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.925 [2024-04-18 21:04:51.598803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.925 [2024-04-18 21:04:51.599816] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.925 [2024-04-18 21:04:51.600820] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.925 [2024-04-18 21:04:51.601828] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:35.925 [2024-04-18 21:04:51.601841] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd5ba0d4000 00:11:35.925 [2024-04-18 21:04:51.602781] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:35.925 [2024-04-18 21:04:51.611300] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:35.925 [2024-04-18 21:04:51.611321] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:35.925 [2024-04-18 21:04:51.616415] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:35.925 [2024-04-18 21:04:51.616453] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:35.925 [2024-04-18 21:04:51.616523] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:11:35.925 [2024-04-18 21:04:51.616539] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:35.925 [2024-04-18 21:04:51.616544] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:35.925 [2024-04-18 21:04:51.617417] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:35.925 [2024-04-18 21:04:51.617425] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:35.925 [2024-04-18 21:04:51.617431] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:35.925 [2024-04-18 21:04:51.618421] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:35.925 [2024-04-18 21:04:51.618429] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:35.925 [2024-04-18 21:04:51.618436] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:35.925 [2024-04-18 21:04:51.619425] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:35.925 [2024-04-18 21:04:51.619433] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:35.925 [2024-04-18 21:04:51.620428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:35.925 [2024-04-18 21:04:51.620436] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:35.925 [2024-04-18 21:04:51.620440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:35.925 [2024-04-18 21:04:51.620446] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:35.925 [2024-04-18 21:04:51.620551] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:35.925 [2024-04-18 21:04:51.620555] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:35.925 [2024-04-18 21:04:51.620560] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:35.925 [2024-04-18 21:04:51.621441] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:35.925 [2024-04-18 21:04:51.622442] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:35.925 [2024-04-18 21:04:51.623448] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:35.925 [2024-04-18 21:04:51.624449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:35.925 [2024-04-18 21:04:51.624486] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:35.925 [2024-04-18 21:04:51.625461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:35.925 [2024-04-18 21:04:51.625469] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:35.925 [2024-04-18 21:04:51.625473] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:35.925 [2024-04-18 21:04:51.625490] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:35.925 [2024-04-18 21:04:51.625497] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:35.925 [2024-04-18 21:04:51.625507] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.925 [2024-04-18 21:04:51.625515] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.925 [2024-04-18 21:04:51.625526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.925 [2024-04-18 21:04:51.633521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:35.925 [2024-04-18 21:04:51.633532] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:35.925 [2024-04-18 21:04:51.633537] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:35.925 [2024-04-18 21:04:51.633540] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:35.925 [2024-04-18 21:04:51.633544] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:35.925 [2024-04-18 21:04:51.633548] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:35.925 [2024-04-18 21:04:51.633552] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:35.925 [2024-04-18 21:04:51.633557] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:35.925 [2024-04-18 21:04:51.633565] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:35.925 [2024-04-18 21:04:51.633576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:35.925 [2024-04-18 21:04:51.641520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:35.925 [2024-04-18 21:04:51.641531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.925 [2024-04-18 21:04:51.641538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.925 [2024-04-18 21:04:51.641545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.925 [2024-04-18 21:04:51.641552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.925 [2024-04-18 21:04:51.641557] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:35.925 [2024-04-18 21:04:51.641565] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:35.925 [2024-04-18 21:04:51.641573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:35.925 [2024-04-18 21:04:51.649516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:35.925 [2024-04-18 21:04:51.649524] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:35.925 [2024-04-18 21:04:51.649528] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.649536] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.649541] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.649549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.657516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.657559] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.657566] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.657573] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:35.926 [2024-04-18 21:04:51.657577] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:35.926 [2024-04-18 21:04:51.657583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:35.926 
[2024-04-18 21:04:51.665516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.665528] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:35.926 [2024-04-18 21:04:51.665540] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.665547] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.665554] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.926 [2024-04-18 21:04:51.665557] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.926 [2024-04-18 21:04:51.665563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.673516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.673530] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.673536] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.673543] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.926 [2024-04-18 21:04:51.673546] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.926 [2024-04-18 21:04:51.673552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.681515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.681525] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.681531] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.681537] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.681543] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.681547] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.681551] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:35.926 [2024-04-18 21:04:51.681556] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:35.926 [2024-04-18 21:04:51.681560] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:35.926 [2024-04-18 21:04:51.681575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.689515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.689527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.697516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.697527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.705515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.705527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.713515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.713527] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:35.926 [2024-04-18 21:04:51.713531] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:35.926 [2024-04-18 21:04:51.713534] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:35.926 [2024-04-18 21:04:51.713537] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:35.926 [2024-04-18 21:04:51.713543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:35.926 [2024-04-18 21:04:51.713549] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:35.926 [2024-04-18 21:04:51.713553] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:35.926 [2024-04-18 21:04:51.713558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.713564] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:35.926 [2024-04-18 21:04:51.713568] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.926 [2024-04-18 21:04:51.713576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.713582] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:35.926 [2024-04-18 21:04:51.713586] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:35.926 [2024-04-18 21:04:51.713591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:35.926 [2024-04-18 21:04:51.721516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.721530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.721538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:35.926 [2024-04-18 21:04:51.721544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:35.926 ===================================================== 00:11:35.926 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:35.926 ===================================================== 00:11:35.926 Controller Capabilities/Features 00:11:35.926 ================================ 00:11:35.926 Vendor ID: 4e58 00:11:35.926 Subsystem Vendor ID: 4e58 00:11:35.926 Serial Number: SPDK2 00:11:35.926 Model Number: SPDK bdev Controller 00:11:35.926 Firmware Version: 24.05 00:11:35.926 Recommended Arb Burst: 6 00:11:35.926 IEEE OUI Identifier: 8d 6b 50 00:11:35.926 Multi-path I/O 00:11:35.926 May have multiple subsystem ports: Yes 00:11:35.926 May have multiple controllers: Yes 00:11:35.926 Associated with SR-IOV VF: No 00:11:35.926 Max Data Transfer Size: 131072 00:11:35.926 Max Number of Namespaces: 32 00:11:35.926 Max Number of I/O Queues: 127 00:11:35.926 NVMe Specification Version (VS): 1.3 00:11:35.926 NVMe Specification Version (Identify): 1.3 00:11:35.926 Maximum Queue Entries: 256 00:11:35.926 Contiguous Queues Required: Yes 00:11:35.926 Arbitration Mechanisms Supported 00:11:35.926 Weighted Round Robin: Not Supported 00:11:35.926 Vendor Specific: Not Supported 00:11:35.926 Reset Timeout: 15000 ms 00:11:35.926 Doorbell Stride: 4 bytes 00:11:35.926 NVM Subsystem Reset: Not Supported 00:11:35.926 Command Sets Supported 00:11:35.926 NVM Command Set: Supported 00:11:35.926 Boot Partition: Not Supported 00:11:35.926 Memory Page Size Minimum: 4096 bytes 00:11:35.926 Memory Page Size Maximum: 4096 bytes 00:11:35.926 Persistent Memory Region: Not Supported 00:11:35.926 Optional Asynchronous Events Supported 00:11:35.926 Namespace Attribute Notices: Supported 00:11:35.926 Firmware Activation Notices: Not Supported 00:11:35.926 ANA Change Notices: Not Supported 00:11:35.926 PLE Aggregate Log Change Notices: Not Supported 00:11:35.926 LBA Status Info Alert Notices: Not Supported 00:11:35.926 EGE Aggregate Log Change Notices: Not Supported 00:11:35.926 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.926 Zone Descriptor Change Notices: Not Supported 00:11:35.926 Discovery Log Change Notices: Not Supported 00:11:35.926 Controller Attributes 00:11:35.926 128-bit Host Identifier: Supported 00:11:35.926 Non-Operational Permissive Mode: Not Supported 00:11:35.926 NVM Sets: Not Supported 00:11:35.926 Read Recovery Levels: Not Supported 00:11:35.926 Endurance Groups: Not Supported 00:11:35.926 Predictable Latency Mode: Not Supported 00:11:35.926 Traffic Based Keep ALive: Not Supported 00:11:35.926 Namespace Granularity: Not Supported 
00:11:35.926 SQ Associations: Not Supported 00:11:35.926 UUID List: Not Supported 00:11:35.926 Multi-Domain Subsystem: Not Supported 00:11:35.926 Fixed Capacity Management: Not Supported 00:11:35.926 Variable Capacity Management: Not Supported 00:11:35.926 Delete Endurance Group: Not Supported 00:11:35.926 Delete NVM Set: Not Supported 00:11:35.926 Extended LBA Formats Supported: Not Supported 00:11:35.926 Flexible Data Placement Supported: Not Supported 00:11:35.926 00:11:35.926 Controller Memory Buffer Support 00:11:35.926 ================================ 00:11:35.926 Supported: No 00:11:35.926 00:11:35.926 Persistent Memory Region Support 00:11:35.926 ================================ 00:11:35.926 Supported: No 00:11:35.926 00:11:35.926 Admin Command Set Attributes 00:11:35.926 ============================ 00:11:35.926 Security Send/Receive: Not Supported 00:11:35.926 Format NVM: Not Supported 00:11:35.926 Firmware Activate/Download: Not Supported 00:11:35.926 Namespace Management: Not Supported 00:11:35.926 Device Self-Test: Not Supported 00:11:35.926 Directives: Not Supported 00:11:35.926 NVMe-MI: Not Supported 00:11:35.926 Virtualization Management: Not Supported 00:11:35.926 Doorbell Buffer Config: Not Supported 00:11:35.926 Get LBA Status Capability: Not Supported 00:11:35.926 Command & Feature Lockdown Capability: Not Supported 00:11:35.926 Abort Command Limit: 4 00:11:35.926 Async Event Request Limit: 4 00:11:35.926 Number of Firmware Slots: N/A 00:11:35.926 Firmware Slot 1 Read-Only: N/A 00:11:35.926 Firmware Activation Without Reset: N/A 00:11:35.926 Multiple Update Detection Support: N/A 00:11:35.926 Firmware Update Granularity: No Information Provided 00:11:35.926 Per-Namespace SMART Log: No 00:11:35.926 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.926 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:35.926 Command Effects Log Page: Supported 00:11:35.926 Get Log Page Extended Data: Supported 00:11:35.926 Telemetry Log Pages: Not Supported 00:11:35.926 Persistent Event Log Pages: Not Supported 00:11:35.926 Supported Log Pages Log Page: May Support 00:11:35.926 Commands Supported & Effects Log Page: Not Supported 00:11:35.926 Feature Identifiers & Effects Log Page:May Support 00:11:35.926 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.926 Data Area 4 for Telemetry Log: Not Supported 00:11:35.926 Error Log Page Entries Supported: 128 00:11:35.926 Keep Alive: Supported 00:11:35.926 Keep Alive Granularity: 10000 ms 00:11:35.926 00:11:35.926 NVM Command Set Attributes 00:11:35.926 ========================== 00:11:35.926 Submission Queue Entry Size 00:11:35.926 Max: 64 00:11:35.926 Min: 64 00:11:35.926 Completion Queue Entry Size 00:11:35.926 Max: 16 00:11:35.926 Min: 16 00:11:35.926 Number of Namespaces: 32 00:11:35.926 Compare Command: Supported 00:11:35.926 Write Uncorrectable Command: Not Supported 00:11:35.926 Dataset Management Command: Supported 00:11:35.926 Write Zeroes Command: Supported 00:11:35.926 Set Features Save Field: Not Supported 00:11:35.926 Reservations: Not Supported 00:11:35.926 Timestamp: Not Supported 00:11:35.926 Copy: Supported 00:11:35.926 Volatile Write Cache: Present 00:11:35.926 Atomic Write Unit (Normal): 1 00:11:35.926 Atomic Write Unit (PFail): 1 00:11:35.926 Atomic Compare & Write Unit: 1 00:11:35.926 Fused Compare & Write: Supported 00:11:35.926 Scatter-Gather List 00:11:35.926 SGL Command Set: Supported (Dword aligned) 00:11:35.926 SGL Keyed: Not Supported 00:11:35.926 SGL Bit Bucket Descriptor: Not Supported 00:11:35.926 
SGL Metadata Pointer: Not Supported 00:11:35.926 Oversized SGL: Not Supported 00:11:35.926 SGL Metadata Address: Not Supported 00:11:35.926 SGL Offset: Not Supported 00:11:35.926 Transport SGL Data Block: Not Supported 00:11:35.926 Replay Protected Memory Block: Not Supported 00:11:35.926 00:11:35.926 Firmware Slot Information 00:11:35.926 ========================= 00:11:35.926 Active slot: 1 00:11:35.926 Slot 1 Firmware Revision: 24.05 00:11:35.926 00:11:35.926 00:11:35.926 Commands Supported and Effects 00:11:35.926 ============================== 00:11:35.926 Admin Commands 00:11:35.926 -------------- 00:11:35.926 Get Log Page (02h): Supported 00:11:35.926 Identify (06h): Supported 00:11:35.926 Abort (08h): Supported 00:11:35.926 Set Features (09h): Supported 00:11:35.926 Get Features (0Ah): Supported 00:11:35.926 Asynchronous Event Request (0Ch): Supported 00:11:35.926 Keep Alive (18h): Supported 00:11:35.926 I/O Commands 00:11:35.926 ------------ 00:11:35.927 Flush (00h): Supported LBA-Change 00:11:35.927 Write (01h): Supported LBA-Change 00:11:35.927 Read (02h): Supported 00:11:35.927 Compare (05h): Supported 00:11:35.927 Write Zeroes (08h): Supported LBA-Change 00:11:35.927 Dataset Management (09h): Supported LBA-Change 00:11:35.927 Copy (19h): Supported LBA-Change 00:11:35.927 Unknown (79h): Supported LBA-Change 00:11:35.927 Unknown (7Ah): Supported 00:11:35.927 00:11:35.927 Error Log 00:11:35.927 ========= 00:11:35.927 00:11:35.927 Arbitration 00:11:35.927 =========== 00:11:35.927 Arbitration Burst: 1 00:11:35.927 00:11:35.927 Power Management 00:11:35.927 ================ 00:11:35.927 Number of Power States: 1 00:11:35.927 Current Power State: Power State #0 00:11:35.927 Power State #0: 00:11:35.927 Max Power: 0.00 W 00:11:35.927 Non-Operational State: Operational 00:11:35.927 Entry Latency: Not Reported 00:11:35.927 Exit Latency: Not Reported 00:11:35.927 Relative Read Throughput: 0 00:11:35.927 Relative Read Latency: 0 00:11:35.927 Relative Write Throughput: 0 00:11:35.927 Relative Write Latency: 0 00:11:35.927 Idle Power: Not Reported 00:11:35.927 Active Power: Not Reported 00:11:35.927 Non-Operational Permissive Mode: Not Supported 00:11:35.927 00:11:35.927 Health Information 00:11:35.927 ================== 00:11:35.927 Critical Warnings: 00:11:35.927 Available Spare Space: OK 00:11:35.927 Temperature: OK 00:11:35.927 Device Reliability: OK 00:11:35.927 Read Only: No 00:11:35.927 Volatile Memory Backup: OK 00:11:35.927 Current Temperature: 0 Kelvin (-2[2024-04-18 21:04:51.721634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:35.927 [2024-04-18 21:04:51.729514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:35.927 [2024-04-18 21:04:51.729541] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:35.927 [2024-04-18 21:04:51.729549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.927 [2024-04-18 21:04:51.729555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.927 [2024-04-18 21:04:51.729560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.927 [2024-04-18 21:04:51.729566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.927 [2024-04-18 21:04:51.729616] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:35.927 [2024-04-18 21:04:51.729626] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:35.927 [2024-04-18 21:04:51.730619] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:35.927 [2024-04-18 21:04:51.730661] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:35.927 [2024-04-18 21:04:51.730666] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:35.927 [2024-04-18 21:04:51.731626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:35.927 [2024-04-18 21:04:51.731637] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:35.927 [2024-04-18 21:04:51.731681] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:35.927 [2024-04-18 21:04:51.732663] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:35.927 73 Celsius) 00:11:35.927 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:35.927 Available Spare: 0% 00:11:35.927 Available Spare Threshold: 0% 00:11:35.927 Life Percentage Used: 0% 00:11:35.927 Data Units Read: 0 00:11:35.927 Data Units Written: 0 00:11:35.927 Host Read Commands: 0 00:11:35.927 Host Write Commands: 0 00:11:35.927 Controller Busy Time: 0 minutes 00:11:35.927 Power Cycles: 0 00:11:35.927 Power On Hours: 0 hours 00:11:35.927 Unsafe Shutdowns: 0 00:11:35.927 Unrecoverable Media Errors: 0 00:11:35.927 Lifetime Error Log Entries: 0 00:11:35.927 Warning Temperature Time: 0 minutes 00:11:35.927 Critical Temperature Time: 0 minutes 00:11:35.927 00:11:35.927 Number of Queues 00:11:35.927 ================ 00:11:35.927 Number of I/O Submission Queues: 127 00:11:35.927 Number of I/O Completion Queues: 127 00:11:35.927 00:11:35.927 Active Namespaces 00:11:35.927 ================= 00:11:35.927 Namespace ID:1 00:11:35.927 Error Recovery Timeout: Unlimited 00:11:35.927 Command Set Identifier: NVM (00h) 00:11:35.927 Deallocate: Supported 00:11:35.927 Deallocated/Unwritten Error: Not Supported 00:11:35.927 Deallocated Read Value: Unknown 00:11:35.927 Deallocate in Write Zeroes: Not Supported 00:11:35.927 Deallocated Guard Field: 0xFFFF 00:11:35.927 Flush: Supported 00:11:35.927 Reservation: Supported 00:11:35.927 Namespace Sharing Capabilities: Multiple Controllers 00:11:35.927 Size (in LBAs): 131072 (0GiB) 00:11:35.927 Capacity (in LBAs): 131072 (0GiB) 00:11:35.927 Utilization (in LBAs): 131072 (0GiB) 00:11:35.927 NGUID: B73B09B009A048428081A94CB357B932 00:11:35.927 UUID: b73b09b0-09a0-4842-8081-a94cb357b932 00:11:35.927 Thin Provisioning: Not Supported 00:11:35.927 Per-NS Atomic Units: Yes 00:11:35.927 Atomic Boundary Size (Normal): 0 00:11:35.927 Atomic Boundary Size (PFail): 0 00:11:35.927 Atomic Boundary Offset: 0 00:11:35.927 Maximum Single Source Range Length: 65535 
00:11:35.927 Maximum Copy Length: 65535 00:11:35.927 Maximum Source Range Count: 1 00:11:35.927 NGUID/EUI64 Never Reused: No 00:11:35.927 Namespace Write Protected: No 00:11:35.927 Number of LBA Formats: 1 00:11:35.927 Current LBA Format: LBA Format #00 00:11:35.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.927 00:11:35.927 21:04:51 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:35.927 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.183 [2024-04-18 21:04:51.946845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:41.437 [2024-04-18 21:04:57.054765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:41.437 Initializing NVMe Controllers 00:11:41.437 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:41.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:41.437 Initialization complete. Launching workers. 00:11:41.437 ======================================================== 00:11:41.437 Latency(us) 00:11:41.437 Device Information : IOPS MiB/s Average min max 00:11:41.437 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39913.53 155.91 3206.53 973.87 9357.75 00:11:41.437 ======================================================== 00:11:41.437 Total : 39913.53 155.91 3206.53 973.87 9357.75 00:11:41.437 00:11:41.437 21:04:57 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:41.437 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.437 [2024-04-18 21:04:57.270388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:46.701 [2024-04-18 21:05:02.287927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:46.701 Initializing NVMe Controllers 00:11:46.701 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:46.701 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:46.701 Initialization complete. Launching workers. 
00:11:46.701 ======================================================== 00:11:46.701 Latency(us) 00:11:46.701 Device Information : IOPS MiB/s Average min max 00:11:46.701 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39848.26 155.66 3211.78 991.26 8332.14 00:11:46.701 ======================================================== 00:11:46.701 Total : 39848.26 155.66 3211.78 991.26 8332.14 00:11:46.701 00:11:46.701 21:05:02 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:46.701 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.701 [2024-04-18 21:05:02.499379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:52.010 [2024-04-18 21:05:07.641609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:52.010 Initializing NVMe Controllers 00:11:52.010 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:52.010 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:52.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:52.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:52.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:52.010 Initialization complete. Launching workers. 00:11:52.010 Starting thread on core 2 00:11:52.010 Starting thread on core 3 00:11:52.010 Starting thread on core 1 00:11:52.010 21:05:07 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:52.010 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.010 [2024-04-18 21:05:07.923443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:55.304 [2024-04-18 21:05:10.985207] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:55.304 Initializing NVMe Controllers 00:11:55.304 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:55.304 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:55.304 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:55.304 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:55.304 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:55.304 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:55.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:55.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:55.304 Initialization complete. Launching workers. 
00:11:55.304 Starting thread on core 1 with urgent priority queue 00:11:55.304 Starting thread on core 2 with urgent priority queue 00:11:55.304 Starting thread on core 3 with urgent priority queue 00:11:55.304 Starting thread on core 0 with urgent priority queue 00:11:55.304 SPDK bdev Controller (SPDK2 ) core 0: 7347.33 IO/s 13.61 secs/100000 ios 00:11:55.304 SPDK bdev Controller (SPDK2 ) core 1: 9581.33 IO/s 10.44 secs/100000 ios 00:11:55.304 SPDK bdev Controller (SPDK2 ) core 2: 8446.00 IO/s 11.84 secs/100000 ios 00:11:55.304 SPDK bdev Controller (SPDK2 ) core 3: 8030.33 IO/s 12.45 secs/100000 ios 00:11:55.304 ======================================================== 00:11:55.304 00:11:55.304 21:05:11 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:55.304 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.564 [2024-04-18 21:05:11.256998] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:55.564 [2024-04-18 21:05:11.267054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:55.564 Initializing NVMe Controllers 00:11:55.564 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:55.564 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:55.564 Namespace ID: 1 size: 0GB 00:11:55.564 Initialization complete. 00:11:55.564 INFO: using host memory buffer for IO 00:11:55.564 Hello world! 00:11:55.564 21:05:11 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:55.564 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.824 [2024-04-18 21:05:11.531413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:56.763 Initializing NVMe Controllers 00:11:56.763 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:56.763 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:56.763 Initialization complete. Launching workers. 
00:11:56.763 submit (in ns) avg, min, max = 6311.6, 3232.2, 4001105.2 00:11:56.763 complete (in ns) avg, min, max = 20546.3, 1759.1, 4002073.9 00:11:56.763 00:11:56.763 Submit histogram 00:11:56.763 ================ 00:11:56.763 Range in us Cumulative Count 00:11:56.763 3.228 - 3.242: 0.0240% ( 4) 00:11:56.763 3.242 - 3.256: 0.3963% ( 62) 00:11:56.763 3.256 - 3.270: 2.3000% ( 317) 00:11:56.763 3.270 - 3.283: 5.9512% ( 608) 00:11:56.763 3.283 - 3.297: 11.0918% ( 856) 00:11:56.763 3.297 - 3.311: 16.9589% ( 977) 00:11:56.763 3.311 - 3.325: 23.0062% ( 1007) 00:11:56.763 3.325 - 3.339: 28.5191% ( 918) 00:11:56.763 3.339 - 3.353: 33.5635% ( 840) 00:11:56.763 3.353 - 3.367: 39.3646% ( 966) 00:11:56.763 3.367 - 3.381: 44.1809% ( 802) 00:11:56.763 3.381 - 3.395: 48.1143% ( 655) 00:11:56.763 3.395 - 3.409: 51.9938% ( 646) 00:11:56.763 3.409 - 3.423: 58.0231% ( 1004) 00:11:56.763 3.423 - 3.437: 63.7881% ( 960) 00:11:56.763 3.437 - 3.450: 68.0999% ( 718) 00:11:56.763 3.450 - 3.464: 73.6608% ( 926) 00:11:56.763 3.464 - 3.478: 78.6993% ( 839) 00:11:56.763 3.478 - 3.492: 82.0982% ( 566) 00:11:56.763 3.492 - 3.506: 84.5424% ( 407) 00:11:56.763 3.506 - 3.520: 86.0377% ( 249) 00:11:56.763 3.520 - 3.534: 86.9986% ( 160) 00:11:56.763 3.534 - 3.548: 87.5991% ( 100) 00:11:56.763 3.548 - 3.562: 88.0255% ( 71) 00:11:56.763 3.562 - 3.590: 89.3466% ( 220) 00:11:56.763 3.590 - 3.617: 90.8600% ( 252) 00:11:56.763 3.617 - 3.645: 92.6976% ( 306) 00:11:56.763 3.645 - 3.673: 94.4211% ( 287) 00:11:56.763 3.673 - 3.701: 96.1506% ( 288) 00:11:56.763 3.701 - 3.729: 97.5919% ( 240) 00:11:56.763 3.729 - 3.757: 98.4566% ( 144) 00:11:56.763 3.757 - 3.784: 98.9431% ( 81) 00:11:56.763 3.784 - 3.812: 99.2013% ( 43) 00:11:56.763 3.812 - 3.840: 99.3574% ( 26) 00:11:56.763 3.840 - 3.868: 99.4715% ( 19) 00:11:56.763 3.868 - 3.896: 99.5136% ( 7) 00:11:56.763 3.923 - 3.951: 99.5316% ( 3) 00:11:56.763 3.951 - 3.979: 99.5376% ( 1) 00:11:56.763 3.979 - 4.007: 99.5436% ( 1) 00:11:56.763 4.035 - 4.063: 99.5556% ( 2) 00:11:56.763 5.092 - 5.120: 99.5616% ( 1) 00:11:56.763 5.203 - 5.231: 99.5676% ( 1) 00:11:56.763 5.287 - 5.315: 99.5736% ( 1) 00:11:56.763 5.343 - 5.370: 99.5796% ( 1) 00:11:56.763 5.370 - 5.398: 99.5856% ( 1) 00:11:56.763 5.398 - 5.426: 99.5916% ( 1) 00:11:56.763 5.426 - 5.454: 99.5976% ( 1) 00:11:56.763 5.454 - 5.482: 99.6097% ( 2) 00:11:56.763 5.565 - 5.593: 99.6277% ( 3) 00:11:56.763 5.593 - 5.621: 99.6337% ( 1) 00:11:56.763 5.649 - 5.677: 99.6397% ( 1) 00:11:56.763 5.760 - 5.788: 99.6457% ( 1) 00:11:56.763 5.816 - 5.843: 99.6517% ( 1) 00:11:56.763 5.871 - 5.899: 99.6637% ( 2) 00:11:56.763 5.983 - 6.010: 99.6697% ( 1) 00:11:56.763 6.038 - 6.066: 99.6757% ( 1) 00:11:56.763 6.122 - 6.150: 99.6817% ( 1) 00:11:56.763 6.177 - 6.205: 99.6937% ( 2) 00:11:56.763 6.289 - 6.317: 99.6997% ( 1) 00:11:56.763 6.483 - 6.511: 99.7057% ( 1) 00:11:56.763 6.706 - 6.734: 99.7117% ( 1) 00:11:56.763 6.790 - 6.817: 99.7178% ( 1) 00:11:56.763 6.817 - 6.845: 99.7238% ( 1) 00:11:56.763 6.901 - 6.929: 99.7298% ( 1) 00:11:56.763 6.929 - 6.957: 99.7358% ( 1) 00:11:56.763 6.957 - 6.984: 99.7418% ( 1) 00:11:56.763 7.068 - 7.096: 99.7478% ( 1) 00:11:56.763 7.235 - 7.290: 99.7538% ( 1) 00:11:56.763 7.290 - 7.346: 99.7598% ( 1) 00:11:56.763 7.346 - 7.402: 99.7658% ( 1) 00:11:56.763 7.513 - 7.569: 99.7718% ( 1) 00:11:56.763 7.569 - 7.624: 99.7898% ( 3) 00:11:56.763 7.624 - 7.680: 99.8018% ( 2) 00:11:56.763 7.680 - 7.736: 99.8078% ( 1) 00:11:56.763 7.791 - 7.847: 99.8138% ( 1) 00:11:56.763 8.014 - 8.070: 99.8258% ( 2) 00:11:56.763 8.181 - 8.237: 99.8319% ( 1) 
00:11:56.763 8.292 - 8.348: 99.8379% ( 1) 00:11:56.764 8.570 - 8.626: 99.8439% ( 1) 00:11:56.764 [2024-04-18 21:05:12.622500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:56.764 8.682 - 8.737: 99.8499% ( 1) 00:11:56.764 8.793 - 8.849: 99.8559% ( 1) 00:11:56.764 9.016 - 9.071: 99.8679% ( 2) 00:11:56.764 9.350 - 9.405: 99.8739% ( 1) 00:11:56.764 9.405 - 9.461: 99.8799% ( 1) 00:11:56.764 9.517 - 9.572: 99.8919% ( 2) 00:11:56.764 9.739 - 9.795: 99.8979% ( 1) 00:11:56.764 10.184 - 10.240: 99.9039% ( 1) 00:11:56.764 10.574 - 10.630: 99.9099% ( 1) 00:11:56.764 10.741 - 10.797: 99.9159% ( 1) 00:11:56.764 17.252 - 17.363: 99.9219% ( 1) 00:11:56.764 19.478 - 19.590: 99.9279% ( 1) 00:11:56.764 3989.148 - 4017.642: 100.0000% ( 12) 00:11:56.764 00:11:56.764 Complete histogram 00:11:56.764 ================== 00:11:56.764 Range in us Cumulative Count 00:11:56.764 1.753 - 1.760: 0.0060% ( 1) 00:11:56.764 1.767 - 1.774: 0.0120% ( 1) 00:11:56.764 1.781 - 1.795: 0.4564% ( 74) 00:11:56.764 1.795 - 1.809: 24.6637% ( 4031) 00:11:56.764 1.809 - 1.823: 71.9914% ( 7881) 00:11:56.764 1.823 - 1.837: 79.7742% ( 1296) 00:11:56.764 1.837 - 1.850: 88.8061% ( 1504) 00:11:56.764 1.850 - 1.864: 94.3730% ( 927) 00:11:56.764 1.864 - 1.878: 96.0065% ( 272) 00:11:56.764 1.878 - 1.892: 97.5979% ( 265) 00:11:56.764 1.892 - 1.906: 98.4687% ( 145) 00:11:56.764 1.906 - 1.920: 98.7929% ( 54) 00:11:56.764 1.920 - 1.934: 98.9971% ( 34) 00:11:56.764 1.934 - 1.948: 99.1232% ( 21) 00:11:56.764 1.948 - 1.962: 99.1412% ( 3) 00:11:56.764 1.976 - 1.990: 99.1713% ( 5) 00:11:56.764 1.990 - 2.003: 99.1773% ( 1) 00:11:56.764 2.003 - 2.017: 99.1893% ( 2) 00:11:56.764 2.017 - 2.031: 99.2253% ( 6) 00:11:56.764 2.031 - 2.045: 99.2313% ( 1) 00:11:56.764 2.045 - 2.059: 99.2493% ( 3) 00:11:56.764 2.059 - 2.073: 99.2794% ( 5) 00:11:56.764 2.073 - 2.087: 99.2974% ( 3) 00:11:56.764 2.087 - 2.101: 99.3094% ( 2) 00:11:56.764 2.129 - 2.143: 99.3154% ( 1) 00:11:56.764 2.143 - 2.157: 99.3214% ( 1) 00:11:56.764 2.198 - 2.212: 99.3274% ( 1) 00:11:56.764 2.393 - 2.407: 99.3334% ( 1) 00:11:56.764 3.840 - 3.868: 99.3394% ( 1) 00:11:56.764 3.868 - 3.896: 99.3454% ( 1) 00:11:56.764 3.979 - 4.007: 99.3514% ( 1) 00:11:56.764 4.007 - 4.035: 99.3574% ( 1) 00:11:56.764 4.146 - 4.174: 99.3634% ( 1) 00:11:56.764 4.313 - 4.341: 99.3694% ( 1) 00:11:56.764 4.341 - 4.369: 99.3755% ( 1) 00:11:56.764 4.369 - 4.397: 99.3815% ( 1) 00:11:56.764 4.675 - 4.703: 99.3875% ( 1) 00:11:56.764 4.897 - 4.925: 99.3995% ( 2) 00:11:56.764 5.037 - 5.064: 99.4055% ( 1) 00:11:56.764 5.064 - 5.092: 99.4115% ( 1) 00:11:56.764 5.120 - 5.148: 99.4175% ( 1) 00:11:56.764 5.343 - 5.370: 99.4235% ( 1) 00:11:56.764 5.788 - 5.816: 99.4295% ( 1) 00:11:56.764 6.038 - 6.066: 99.4355% ( 1) 00:11:56.764 6.094 - 6.122: 99.4415% ( 1) 00:11:56.764 6.122 - 6.150: 99.4475% ( 1) 00:11:56.764 6.205 - 6.233: 99.4595% ( 2) 00:11:56.764 6.929 - 6.957: 99.4655% ( 1) 00:11:56.764 6.957 - 6.984: 99.4715% ( 1) 00:11:56.764 7.346 - 7.402: 99.4835% ( 2) 00:11:56.764 7.513 - 7.569: 99.4896% ( 1) 00:11:56.764 7.847 - 7.903: 99.5016% ( 2) 00:11:56.764 7.903 - 7.958: 99.5076% ( 1) 00:11:56.764 8.292 - 8.348: 99.5136% ( 1) 00:11:56.764 8.459 - 8.515: 99.5196% ( 1) 00:11:56.764 12.299 - 12.355: 99.5256% ( 1) 00:11:56.764 34.950 - 35.172: 99.5316% ( 1) 00:11:56.764 3989.148 - 4017.642: 100.0000% ( 78) 00:11:56.764 00:11:56.764 21:05:12 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:56.764 
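Before this second AER pass, the preceding runs exercised identify, perf, reconnect, arbitration, hello_world and overhead against the second controller. Condensed, the invocations traced above amount to the following (workspace paths and flags copied from the trace; the reported latencies and IOPS will of course differ per run):

    #!/usr/bin/env bash
    # Condensed form of the per-controller workload pass traced above.
    set -e
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    conn='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # Controller/namespace identify dump (the long report printed above).
    $spdk/build/bin/spdk_nvme_identify -r "$conn" -g -L nvme -L nvme_vfio -L vfio_pci

    # 5-second 4K read and write passes, queue depth 128, on core 1.
    $spdk/build/bin/spdk_nvme_perf -r "$conn" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    $spdk/build/bin/spdk_nvme_perf -r "$conn" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

    # Reconnect, arbitration, hello_world and latency-overhead examples.
    $spdk/build/examples/reconnect   -r "$conn" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    $spdk/build/examples/arbitration -t 3 -r "$conn" -d 256 -g
    $spdk/build/examples/hello_world -d 256 -g -r "$conn"
    $spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$conn"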
21:05:12 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:56.764 21:05:12 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:56.764 21:05:12 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:56.764 21:05:12 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:57.023 [ 00:11:57.023 { 00:11:57.023 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:57.023 "subtype": "Discovery", 00:11:57.023 "listen_addresses": [], 00:11:57.023 "allow_any_host": true, 00:11:57.023 "hosts": [] 00:11:57.023 }, 00:11:57.023 { 00:11:57.023 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:57.023 "subtype": "NVMe", 00:11:57.024 "listen_addresses": [ 00:11:57.024 { 00:11:57.024 "transport": "VFIOUSER", 00:11:57.024 "trtype": "VFIOUSER", 00:11:57.024 "adrfam": "IPv4", 00:11:57.024 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:57.024 "trsvcid": "0" 00:11:57.024 } 00:11:57.024 ], 00:11:57.024 "allow_any_host": true, 00:11:57.024 "hosts": [], 00:11:57.024 "serial_number": "SPDK1", 00:11:57.024 "model_number": "SPDK bdev Controller", 00:11:57.024 "max_namespaces": 32, 00:11:57.024 "min_cntlid": 1, 00:11:57.024 "max_cntlid": 65519, 00:11:57.024 "namespaces": [ 00:11:57.024 { 00:11:57.024 "nsid": 1, 00:11:57.024 "bdev_name": "Malloc1", 00:11:57.024 "name": "Malloc1", 00:11:57.024 "nguid": "1AD79A728BD446BFAD1511E773A7DB35", 00:11:57.024 "uuid": "1ad79a72-8bd4-46bf-ad15-11e773a7db35" 00:11:57.024 }, 00:11:57.024 { 00:11:57.024 "nsid": 2, 00:11:57.024 "bdev_name": "Malloc3", 00:11:57.024 "name": "Malloc3", 00:11:57.024 "nguid": "D51A10A6C7154A58B2E3BAC79B9F4FD1", 00:11:57.024 "uuid": "d51a10a6-c715-4a58-b2e3-bac79b9f4fd1" 00:11:57.024 } 00:11:57.024 ] 00:11:57.024 }, 00:11:57.024 { 00:11:57.024 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:57.024 "subtype": "NVMe", 00:11:57.024 "listen_addresses": [ 00:11:57.024 { 00:11:57.024 "transport": "VFIOUSER", 00:11:57.024 "trtype": "VFIOUSER", 00:11:57.024 "adrfam": "IPv4", 00:11:57.024 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:57.024 "trsvcid": "0" 00:11:57.024 } 00:11:57.024 ], 00:11:57.024 "allow_any_host": true, 00:11:57.024 "hosts": [], 00:11:57.024 "serial_number": "SPDK2", 00:11:57.024 "model_number": "SPDK bdev Controller", 00:11:57.024 "max_namespaces": 32, 00:11:57.024 "min_cntlid": 1, 00:11:57.024 "max_cntlid": 65519, 00:11:57.024 "namespaces": [ 00:11:57.024 { 00:11:57.024 "nsid": 1, 00:11:57.024 "bdev_name": "Malloc2", 00:11:57.024 "name": "Malloc2", 00:11:57.024 "nguid": "B73B09B009A048428081A94CB357B932", 00:11:57.024 "uuid": "b73b09b0-09a0-4842-8081-a94cb357b932" 00:11:57.024 } 00:11:57.024 ] 00:11:57.024 } 00:11:57.024 ] 00:11:57.024 21:05:12 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:57.024 21:05:12 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:57.024 21:05:12 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2976949 00:11:57.024 21:05:12 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:57.024 21:05:12 -- common/autotest_common.sh@1251 -- # local i=0 00:11:57.024 21:05:12 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:57.024 21:05:12 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:57.024 21:05:12 -- common/autotest_common.sh@1262 -- # return 0 00:11:57.024 21:05:12 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:57.024 21:05:12 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:57.024 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.283 [2024-04-18 21:05:12.983889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:57.283 Malloc4 00:11:57.283 21:05:13 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:57.542 [2024-04-18 21:05:13.218613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:57.542 21:05:13 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:57.542 Asynchronous Event Request test 00:11:57.542 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:57.542 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:57.542 Registering asynchronous event callbacks... 00:11:57.542 Starting namespace attribute notice tests for all controllers... 00:11:57.542 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:57.542 aer_cb - Changed Namespace 00:11:57.542 Cleaning up... 00:11:57.542 [ 00:11:57.542 { 00:11:57.542 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:57.542 "subtype": "Discovery", 00:11:57.542 "listen_addresses": [], 00:11:57.542 "allow_any_host": true, 00:11:57.542 "hosts": [] 00:11:57.542 }, 00:11:57.542 { 00:11:57.542 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:57.542 "subtype": "NVMe", 00:11:57.543 "listen_addresses": [ 00:11:57.543 { 00:11:57.543 "transport": "VFIOUSER", 00:11:57.543 "trtype": "VFIOUSER", 00:11:57.543 "adrfam": "IPv4", 00:11:57.543 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:57.543 "trsvcid": "0" 00:11:57.543 } 00:11:57.543 ], 00:11:57.543 "allow_any_host": true, 00:11:57.543 "hosts": [], 00:11:57.543 "serial_number": "SPDK1", 00:11:57.543 "model_number": "SPDK bdev Controller", 00:11:57.543 "max_namespaces": 32, 00:11:57.543 "min_cntlid": 1, 00:11:57.543 "max_cntlid": 65519, 00:11:57.543 "namespaces": [ 00:11:57.543 { 00:11:57.543 "nsid": 1, 00:11:57.543 "bdev_name": "Malloc1", 00:11:57.543 "name": "Malloc1", 00:11:57.543 "nguid": "1AD79A728BD446BFAD1511E773A7DB35", 00:11:57.543 "uuid": "1ad79a72-8bd4-46bf-ad15-11e773a7db35" 00:11:57.543 }, 00:11:57.543 { 00:11:57.543 "nsid": 2, 00:11:57.543 "bdev_name": "Malloc3", 00:11:57.543 "name": "Malloc3", 00:11:57.543 "nguid": "D51A10A6C7154A58B2E3BAC79B9F4FD1", 00:11:57.543 "uuid": "d51a10a6-c715-4a58-b2e3-bac79b9f4fd1" 00:11:57.543 } 00:11:57.543 ] 00:11:57.543 }, 00:11:57.543 { 00:11:57.543 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:57.543 "subtype": "NVMe", 00:11:57.543 "listen_addresses": [ 00:11:57.543 { 00:11:57.543 "transport": "VFIOUSER", 00:11:57.543 "trtype": "VFIOUSER", 00:11:57.543 "adrfam": "IPv4", 00:11:57.543 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:57.543 "trsvcid": "0" 00:11:57.543 } 00:11:57.543 ], 00:11:57.543 "allow_any_host": true, 00:11:57.543 "hosts": [], 00:11:57.543 "serial_number": "SPDK2", 00:11:57.543 "model_number": "SPDK bdev Controller", 00:11:57.543 "max_namespaces": 32, 00:11:57.543 "min_cntlid": 1, 
00:11:57.543 "max_cntlid": 65519, 00:11:57.543 "namespaces": [ 00:11:57.543 { 00:11:57.543 "nsid": 1, 00:11:57.543 "bdev_name": "Malloc2", 00:11:57.543 "name": "Malloc2", 00:11:57.543 "nguid": "B73B09B009A048428081A94CB357B932", 00:11:57.543 "uuid": "b73b09b0-09a0-4842-8081-a94cb357b932" 00:11:57.543 }, 00:11:57.543 { 00:11:57.543 "nsid": 2, 00:11:57.543 "bdev_name": "Malloc4", 00:11:57.543 "name": "Malloc4", 00:11:57.543 "nguid": "A9806139D4B04D22B5977C7704B833DF", 00:11:57.543 "uuid": "a9806139-d4b0-4d22-b597-7c7704b833df" 00:11:57.543 } 00:11:57.543 ] 00:11:57.543 } 00:11:57.543 ] 00:11:57.543 21:05:13 -- target/nvmf_vfio_user.sh@44 -- # wait 2976949 00:11:57.543 21:05:13 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:57.543 21:05:13 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2969314 00:11:57.543 21:05:13 -- common/autotest_common.sh@936 -- # '[' -z 2969314 ']' 00:11:57.543 21:05:13 -- common/autotest_common.sh@940 -- # kill -0 2969314 00:11:57.543 21:05:13 -- common/autotest_common.sh@941 -- # uname 00:11:57.543 21:05:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:57.543 21:05:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2969314 00:11:57.802 21:05:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:57.802 21:05:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:57.802 21:05:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2969314' 00:11:57.802 killing process with pid 2969314 00:11:57.802 21:05:13 -- common/autotest_common.sh@955 -- # kill 2969314 00:11:57.802 [2024-04-18 21:05:13.475030] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:11:57.802 21:05:13 -- common/autotest_common.sh@960 -- # wait 2969314 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2977181 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2977181' 00:11:58.062 Process pid: 2977181 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:58.062 21:05:13 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2977181 00:11:58.062 21:05:13 -- common/autotest_common.sh@817 -- # '[' -z 2977181 ']' 00:11:58.062 21:05:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.062 21:05:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:58.062 21:05:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:58.062 21:05:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:58.062 21:05:13 -- common/autotest_common.sh@10 -- # set +x 00:11:58.062 [2024-04-18 21:05:13.805966] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:58.062 [2024-04-18 21:05:13.806908] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:11:58.062 [2024-04-18 21:05:13.806945] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.062 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.062 [2024-04-18 21:05:13.864788] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.062 [2024-04-18 21:05:13.932030] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.062 [2024-04-18 21:05:13.932070] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.062 [2024-04-18 21:05:13.932077] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.062 [2024-04-18 21:05:13.932083] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.062 [2024-04-18 21:05:13.932087] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.062 [2024-04-18 21:05:13.932175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.062 [2024-04-18 21:05:13.932272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.062 [2024-04-18 21:05:13.932335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.062 [2024-04-18 21:05:13.932336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.321 [2024-04-18 21:05:14.010920] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:11:58.321 [2024-04-18 21:05:14.011102] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:11:58.321 [2024-04-18 21:05:14.011257] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:11:58.321 [2024-04-18 21:05:14.011726] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:58.321 [2024-04-18 21:05:14.011814] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
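For reference, the vfio-user target setup that the trace below performs reduces to a short RPC sequence. This is a condensed sketch assembled by the editor from the commands recorded in this run (same socket paths, bdev names and NQNs); it is not a separate script shipped with SPDK, and the interrupt-mode-specific transport flags passed in this run ('-M -I') are left out here.

    # start the SPDK NVMe-oF target (interrupt mode, as in this run) and let it bring up its RPC socket
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # create the vfio-user transport and a socket directory for the first controller
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    # back the namespace with a 64 MiB malloc bdev (512-byte blocks)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    # expose the bdev through a subsystem listening on the vfio-user socket
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second controller (cnode2 under vfio-user2/2) is created the same way in the trace that follows.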
00:11:58.889 21:05:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:58.889 21:05:14 -- common/autotest_common.sh@850 -- # return 0 00:11:58.889 21:05:14 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:59.827 21:05:15 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:00.086 21:05:15 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:00.086 21:05:15 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:00.086 21:05:15 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:00.086 21:05:15 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:00.086 21:05:15 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:00.086 Malloc1 00:12:00.086 21:05:15 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:00.345 21:05:16 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:00.604 21:05:16 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:00.604 21:05:16 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:00.604 21:05:16 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:00.604 21:05:16 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:00.863 Malloc2 00:12:00.863 21:05:16 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:01.123 21:05:16 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:01.382 21:05:17 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:01.382 21:05:17 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:01.382 21:05:17 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2977181 00:12:01.382 21:05:17 -- common/autotest_common.sh@936 -- # '[' -z 2977181 ']' 00:12:01.382 21:05:17 -- common/autotest_common.sh@940 -- # kill -0 2977181 00:12:01.382 21:05:17 -- common/autotest_common.sh@941 -- # uname 00:12:01.382 21:05:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:01.382 21:05:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2977181 00:12:01.382 21:05:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:01.382 21:05:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:01.382 21:05:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2977181' 00:12:01.382 killing process with pid 2977181 00:12:01.382 21:05:17 -- common/autotest_common.sh@955 -- # kill 2977181 00:12:01.382 21:05:17 -- common/autotest_common.sh@960 -- # wait 2977181 00:12:01.641 21:05:17 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:12:01.642 21:05:17 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:01.642 00:12:01.642 real 0m51.277s 00:12:01.642 user 3m22.916s 00:12:01.642 sys 0m3.604s 00:12:01.642 21:05:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:01.642 21:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:01.642 ************************************ 00:12:01.642 END TEST nvmf_vfio_user 00:12:01.642 ************************************ 00:12:01.642 21:05:17 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:01.642 21:05:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:01.642 21:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.642 21:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:01.902 ************************************ 00:12:01.902 START TEST nvmf_vfio_user_nvme_compliance 00:12:01.902 ************************************ 00:12:01.902 21:05:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:01.902 * Looking for test storage... 00:12:01.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:01.902 21:05:17 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.902 21:05:17 -- nvmf/common.sh@7 -- # uname -s 00:12:01.902 21:05:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.902 21:05:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.902 21:05:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.902 21:05:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.902 21:05:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.902 21:05:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.902 21:05:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.902 21:05:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.902 21:05:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.902 21:05:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.902 21:05:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:01.902 21:05:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:01.902 21:05:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.902 21:05:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.902 21:05:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.902 21:05:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.902 21:05:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.902 21:05:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.902 21:05:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.902 21:05:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.902 21:05:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.902 21:05:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.902 21:05:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.902 21:05:17 -- paths/export.sh@5 -- # export PATH 00:12:01.902 21:05:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.902 21:05:17 -- nvmf/common.sh@47 -- # : 0 00:12:01.902 21:05:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.902 21:05:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.902 21:05:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.902 21:05:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.902 21:05:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.902 21:05:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.902 21:05:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.902 21:05:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.902 21:05:17 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.902 21:05:17 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.902 21:05:17 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:01.902 21:05:17 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:01.902 21:05:17 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:01.902 21:05:17 -- compliance/compliance.sh@20 -- # nvmfpid=2977947 00:12:01.902 21:05:17 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2977947' 00:12:01.902 Process pid: 2977947 00:12:01.902 21:05:17 
-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:01.902 21:05:17 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:01.902 21:05:17 -- compliance/compliance.sh@24 -- # waitforlisten 2977947 00:12:01.902 21:05:17 -- common/autotest_common.sh@817 -- # '[' -z 2977947 ']' 00:12:01.902 21:05:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.902 21:05:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:01.902 21:05:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.902 21:05:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:01.902 21:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:02.162 [2024-04-18 21:05:17.845772] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:12:02.162 [2024-04-18 21:05:17.845814] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.162 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.162 [2024-04-18 21:05:17.905215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:02.162 [2024-04-18 21:05:17.974680] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.162 [2024-04-18 21:05:17.974722] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.162 [2024-04-18 21:05:17.974729] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.162 [2024-04-18 21:05:17.974734] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.162 [2024-04-18 21:05:17.974739] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:02.162 [2024-04-18 21:05:17.974825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.162 [2024-04-18 21:05:17.974922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.162 [2024-04-18 21:05:17.974923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.731 21:05:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:02.731 21:05:18 -- common/autotest_common.sh@850 -- # return 0 00:12:02.731 21:05:18 -- compliance/compliance.sh@26 -- # sleep 1 00:12:04.108 21:05:19 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:04.108 21:05:19 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:04.108 21:05:19 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:04.108 21:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:04.108 21:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:04.108 21:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:04.108 21:05:19 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:04.108 21:05:19 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:04.108 21:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:04.109 21:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:04.109 malloc0 00:12:04.109 21:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:04.109 21:05:19 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:04.109 21:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:04.109 21:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:04.109 21:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:04.109 21:05:19 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:04.109 21:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:04.109 21:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:04.109 21:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:04.109 21:05:19 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:04.109 21:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:04.109 21:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:04.109 21:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:04.109 21:05:19 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:04.109 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.109 00:12:04.109 00:12:04.109 CUnit - A unit testing framework for C - Version 2.1-3 00:12:04.109 http://cunit.sourceforge.net/ 00:12:04.109 00:12:04.109 00:12:04.109 Suite: nvme_compliance 00:12:04.109 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-18 21:05:19.867944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.109 [2024-04-18 21:05:19.869250] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:04.109 [2024-04-18 21:05:19.869267] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:04.109 [2024-04-18 21:05:19.869273] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:04.109 
[2024-04-18 21:05:19.870964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.109 passed 00:12:04.109 Test: admin_identify_ctrlr_verify_fused ...[2024-04-18 21:05:19.952517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.109 [2024-04-18 21:05:19.955541] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.109 passed 00:12:04.109 Test: admin_identify_ns ...[2024-04-18 21:05:20.036580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.368 [2024-04-18 21:05:20.097523] ctrlr.c:2691:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:04.368 [2024-04-18 21:05:20.105525] ctrlr.c:2691:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:04.368 [2024-04-18 21:05:20.126616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.368 passed 00:12:04.368 Test: admin_get_features_mandatory_features ...[2024-04-18 21:05:20.204821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.368 [2024-04-18 21:05:20.207841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.368 passed 00:12:04.368 Test: admin_get_features_optional_features ...[2024-04-18 21:05:20.289384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.368 [2024-04-18 21:05:20.292399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.627 passed 00:12:04.627 Test: admin_set_features_number_of_queues ...[2024-04-18 21:05:20.372101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.627 [2024-04-18 21:05:20.477683] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.627 passed 00:12:04.627 Test: admin_get_log_page_mandatory_logs ...[2024-04-18 21:05:20.551739] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.627 [2024-04-18 21:05:20.555767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.887 passed 00:12:04.887 Test: admin_get_log_page_with_lpo ...[2024-04-18 21:05:20.631905] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.887 [2024-04-18 21:05:20.703519] ctrlr.c:2639:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:04.887 [2024-04-18 21:05:20.716573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.887 passed 00:12:04.887 Test: fabric_property_get ...[2024-04-18 21:05:20.790528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.887 [2024-04-18 21:05:20.791757] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:04.887 [2024-04-18 21:05:20.793552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.147 passed 00:12:05.147 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-18 21:05:20.872032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.147 [2024-04-18 21:05:20.873247] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:05.147 [2024-04-18 21:05:20.875054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:12:05.147 passed 00:12:05.147 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-18 21:05:20.951921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.147 [2024-04-18 21:05:21.039522] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:05.147 [2024-04-18 21:05:21.055515] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:05.147 [2024-04-18 21:05:21.060596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.407 passed 00:12:05.407 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-18 21:05:21.134908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.407 [2024-04-18 21:05:21.136133] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:05.407 [2024-04-18 21:05:21.137927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.407 passed 00:12:05.407 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-18 21:05:21.216752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.407 [2024-04-18 21:05:21.293522] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:05.407 [2024-04-18 21:05:21.317516] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:05.407 [2024-04-18 21:05:21.322606] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.666 passed 00:12:05.667 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-18 21:05:21.396738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.667 [2024-04-18 21:05:21.397950] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:05.667 [2024-04-18 21:05:21.397973] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:05.667 [2024-04-18 21:05:21.399758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.667 passed 00:12:05.667 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-18 21:05:21.477556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.667 [2024-04-18 21:05:21.570522] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:05.667 [2024-04-18 21:05:21.578528] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:05.667 [2024-04-18 21:05:21.586522] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:05.667 [2024-04-18 21:05:21.594521] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:05.927 [2024-04-18 21:05:21.623601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.927 passed 00:12:05.927 Test: admin_create_io_sq_verify_pc ...[2024-04-18 21:05:21.699631] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.927 [2024-04-18 21:05:21.718526] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:05.927 [2024-04-18 21:05:21.735819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.927 passed 00:12:05.927 Test: admin_create_io_qp_max_qps ...[2024-04-18 21:05:21.814406] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.308 [2024-04-18 21:05:22.913520] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:07.567 [2024-04-18 21:05:23.303031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.567 passed 00:12:07.567 Test: admin_create_io_sq_shared_cq ...[2024-04-18 21:05:23.375874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.827 [2024-04-18 21:05:23.507542] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:07.827 [2024-04-18 21:05:23.544575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.827 passed 00:12:07.827 00:12:07.827 Run Summary: Type Total Ran Passed Failed Inactive 00:12:07.827 suites 1 1 n/a 0 0 00:12:07.827 tests 18 18 18 0 0 00:12:07.827 asserts 360 360 360 0 n/a 00:12:07.827 00:12:07.827 Elapsed time = 1.512 seconds 00:12:07.827 21:05:23 -- compliance/compliance.sh@42 -- # killprocess 2977947 00:12:07.827 21:05:23 -- common/autotest_common.sh@936 -- # '[' -z 2977947 ']' 00:12:07.827 21:05:23 -- common/autotest_common.sh@940 -- # kill -0 2977947 00:12:07.827 21:05:23 -- common/autotest_common.sh@941 -- # uname 00:12:07.827 21:05:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.827 21:05:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2977947 00:12:07.827 21:05:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:07.827 21:05:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:07.827 21:05:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2977947' 00:12:07.827 killing process with pid 2977947 00:12:07.827 21:05:23 -- common/autotest_common.sh@955 -- # kill 2977947 00:12:07.827 21:05:23 -- common/autotest_common.sh@960 -- # wait 2977947 00:12:08.087 21:05:23 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:08.087 21:05:23 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:08.087 00:12:08.087 real 0m6.178s 00:12:08.087 user 0m17.594s 00:12:08.087 sys 0m0.488s 00:12:08.087 21:05:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:08.087 21:05:23 -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 ************************************ 00:12:08.087 END TEST nvmf_vfio_user_nvme_compliance 00:12:08.087 ************************************ 00:12:08.087 21:05:23 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:08.087 21:05:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:08.087 21:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.087 21:05:23 -- common/autotest_common.sh@10 -- # set +x 00:12:08.347 ************************************ 00:12:08.347 START TEST nvmf_vfio_user_fuzz 00:12:08.347 ************************************ 00:12:08.347 21:05:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:08.347 * Looking for test storage... 
00:12:08.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.347 21:05:24 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.347 21:05:24 -- nvmf/common.sh@7 -- # uname -s 00:12:08.347 21:05:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.347 21:05:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.347 21:05:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.347 21:05:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.347 21:05:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.347 21:05:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.347 21:05:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.347 21:05:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.347 21:05:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.347 21:05:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.347 21:05:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.347 21:05:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.347 21:05:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.347 21:05:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.347 21:05:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.347 21:05:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.347 21:05:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.347 21:05:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.347 21:05:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.347 21:05:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.348 21:05:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.348 21:05:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.348 21:05:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.348 21:05:24 -- paths/export.sh@5 -- # export PATH 00:12:08.348 21:05:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.348 21:05:24 -- nvmf/common.sh@47 -- # : 0 00:12:08.348 21:05:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.348 21:05:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.348 21:05:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.348 21:05:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.348 21:05:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.348 21:05:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.348 21:05:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.348 21:05:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2978952 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2978952' 00:12:08.348 Process pid: 2978952 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:08.348 21:05:24 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2978952 00:12:08.348 21:05:24 -- common/autotest_common.sh@817 -- # '[' -z 2978952 ']' 00:12:08.348 21:05:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.348 21:05:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:08.348 21:05:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
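The fuzz pass that follows wires up a single malloc-backed vfio-user subsystem and then points nvme_fuzz at it. Below is a condensed sketch of that sequence, reconstructed by the editor from the traced commands (same NQN, socket path and fuzzer arguments as this run); the trace itself uses the harness's rpc_cmd wrapper, shown here as direct scripts/rpc.py calls.

    # create the vfio-user transport and a malloc-backed subsystem at /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # fuzz admin and I/O commands against the vfio-user endpoint for 30 seconds with a fixed seed
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a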
00:12:08.348 21:05:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:08.348 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:12:09.286 21:05:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.286 21:05:24 -- common/autotest_common.sh@850 -- # return 0 00:12:09.286 21:05:24 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:10.252 21:05:25 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:10.252 21:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.252 21:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:10.252 21:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.252 21:05:26 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:10.252 21:05:26 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:10.252 21:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.252 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:12:10.252 malloc0 00:12:10.252 21:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.252 21:05:26 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:10.252 21:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.252 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:12:10.252 21:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.252 21:05:26 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:10.252 21:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.252 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:12:10.252 21:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.252 21:05:26 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:10.252 21:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.252 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:12:10.252 21:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.252 21:05:26 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:10.252 21:05:26 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:42.347 Fuzzing completed. 
Shutting down the fuzz application 00:12:42.347 00:12:42.347 Dumping successful admin opcodes: 00:12:42.347 8, 9, 10, 24, 00:12:42.347 Dumping successful io opcodes: 00:12:42.347 0, 00:12:42.347 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1117485, total successful commands: 4399, random_seed: 2178464640 00:12:42.347 NS: 0x200003a1ef00 admin qp, Total commands completed: 277589, total successful commands: 2241, random_seed: 1576519872 00:12:42.347 21:05:56 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:42.347 21:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:42.347 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:12:42.347 21:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:42.347 21:05:56 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2978952 00:12:42.347 21:05:56 -- common/autotest_common.sh@936 -- # '[' -z 2978952 ']' 00:12:42.347 21:05:56 -- common/autotest_common.sh@940 -- # kill -0 2978952 00:12:42.347 21:05:56 -- common/autotest_common.sh@941 -- # uname 00:12:42.347 21:05:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:42.347 21:05:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2978952 00:12:42.347 21:05:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:42.347 21:05:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:42.347 21:05:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2978952' 00:12:42.347 killing process with pid 2978952 00:12:42.347 21:05:56 -- common/autotest_common.sh@955 -- # kill 2978952 00:12:42.347 21:05:56 -- common/autotest_common.sh@960 -- # wait 2978952 00:12:42.348 21:05:56 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:42.348 21:05:56 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:42.348 00:12:42.348 real 0m32.922s 00:12:42.348 user 0m35.567s 00:12:42.348 sys 0m25.593s 00:12:42.348 21:05:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:42.348 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:12:42.348 ************************************ 00:12:42.348 END TEST nvmf_vfio_user_fuzz 00:12:42.348 ************************************ 00:12:42.348 21:05:56 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:42.348 21:05:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:42.348 21:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.348 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:12:42.348 ************************************ 00:12:42.348 START TEST nvmf_host_management 00:12:42.348 ************************************ 00:12:42.348 21:05:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:42.348 * Looking for test storage... 
00:12:42.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.348 21:05:57 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.348 21:05:57 -- nvmf/common.sh@7 -- # uname -s 00:12:42.348 21:05:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.348 21:05:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.348 21:05:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.348 21:05:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.348 21:05:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.348 21:05:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.348 21:05:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.348 21:05:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.348 21:05:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.348 21:05:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.348 21:05:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:42.348 21:05:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:42.348 21:05:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.348 21:05:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.348 21:05:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.348 21:05:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.348 21:05:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.348 21:05:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.348 21:05:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.348 21:05:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.348 21:05:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.348 21:05:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.348 21:05:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.348 21:05:57 -- paths/export.sh@5 -- # export PATH 00:12:42.348 21:05:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.348 21:05:57 -- nvmf/common.sh@47 -- # : 0 00:12:42.348 21:05:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.348 21:05:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.348 21:05:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.348 21:05:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.348 21:05:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.348 21:05:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.348 21:05:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.348 21:05:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.348 21:05:57 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.348 21:05:57 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.348 21:05:57 -- target/host_management.sh@105 -- # nvmftestinit 00:12:42.348 21:05:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:42.348 21:05:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.348 21:05:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:42.348 21:05:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:42.348 21:05:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:42.348 21:05:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.348 21:05:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.348 21:05:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.348 21:05:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:42.348 21:05:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:42.348 21:05:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:42.348 21:05:57 -- common/autotest_common.sh@10 -- # set +x 00:12:47.630 21:06:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:47.630 21:06:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.630 21:06:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.630 21:06:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.630 21:06:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.630 21:06:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.630 21:06:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.630 21:06:02 -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.630 21:06:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.630 
21:06:02 -- nvmf/common.sh@296 -- # e810=() 00:12:47.630 21:06:02 -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.630 21:06:02 -- nvmf/common.sh@297 -- # x722=() 00:12:47.630 21:06:02 -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.630 21:06:02 -- nvmf/common.sh@298 -- # mlx=() 00:12:47.630 21:06:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.630 21:06:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.630 21:06:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.630 21:06:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.630 21:06:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.631 21:06:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.631 21:06:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.631 21:06:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.631 21:06:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.631 21:06:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.631 21:06:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.631 21:06:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.631 21:06:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.631 21:06:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.631 21:06:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.631 21:06:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.631 21:06:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:47.631 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:47.631 21:06:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.631 21:06:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:47.631 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:47.631 21:06:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.631 21:06:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.631 21:06:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.631 21:06:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:47.631 21:06:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.631 21:06:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:12:47.631 Found net devices under 0000:86:00.0: cvl_0_0 00:12:47.631 21:06:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.631 21:06:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.631 21:06:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.631 21:06:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:47.631 21:06:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.631 21:06:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:47.631 Found net devices under 0000:86:00.1: cvl_0_1 00:12:47.631 21:06:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.631 21:06:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:47.631 21:06:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:47.631 21:06:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:47.631 21:06:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:47.631 21:06:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.631 21:06:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.631 21:06:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.631 21:06:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.631 21:06:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.631 21:06:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.631 21:06:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.631 21:06:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.631 21:06:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.631 21:06:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.631 21:06:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.631 21:06:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.631 21:06:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.631 21:06:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.631 21:06:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.631 21:06:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.631 21:06:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.631 21:06:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.631 21:06:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.631 21:06:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:12:47.631 00:12:47.631 --- 10.0.0.2 ping statistics --- 00:12:47.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.631 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:47.631 21:06:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:12:47.631 00:12:47.631 --- 10.0.0.1 ping statistics --- 00:12:47.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.631 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:47.631 21:06:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.631 21:06:03 -- nvmf/common.sh@411 -- # return 0 00:12:47.631 21:06:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:47.631 21:06:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.631 21:06:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:47.631 21:06:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:47.631 21:06:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.631 21:06:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:47.631 21:06:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:47.631 21:06:03 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:12:47.631 21:06:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:47.631 21:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:47.631 21:06:03 -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 ************************************ 00:12:47.631 START TEST nvmf_host_management 00:12:47.631 ************************************ 00:12:47.631 21:06:03 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:12:47.631 21:06:03 -- target/host_management.sh@69 -- # starttarget 00:12:47.631 21:06:03 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:47.631 21:06:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:47.631 21:06:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:47.631 21:06:03 -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 21:06:03 -- nvmf/common.sh@470 -- # nvmfpid=2987912 00:12:47.631 21:06:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:47.631 21:06:03 -- nvmf/common.sh@471 -- # waitforlisten 2987912 00:12:47.631 21:06:03 -- common/autotest_common.sh@817 -- # '[' -z 2987912 ']' 00:12:47.631 21:06:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.631 21:06:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:47.631 21:06:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.631 21:06:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:47.631 21:06:03 -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 [2024-04-18 21:06:03.392661] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:12:47.631 [2024-04-18 21:06:03.392699] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.631 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.631 [2024-04-18 21:06:03.456291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.631 [2024-04-18 21:06:03.529299] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:47.631 [2024-04-18 21:06:03.529339] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.631 [2024-04-18 21:06:03.529347] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.631 [2024-04-18 21:06:03.529353] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.631 [2024-04-18 21:06:03.529358] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.631 [2024-04-18 21:06:03.529461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.631 [2024-04-18 21:06:03.529564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.631 [2024-04-18 21:06:03.529672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.631 [2024-04-18 21:06:03.529673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:48.605 21:06:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:48.605 21:06:04 -- common/autotest_common.sh@850 -- # return 0 00:12:48.605 21:06:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:48.605 21:06:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:48.605 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:12:48.605 21:06:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.605 21:06:04 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.605 21:06:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.605 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:12:48.605 [2024-04-18 21:06:04.228198] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.605 21:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.605 21:06:04 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:48.605 21:06:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:48.605 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:12:48.605 21:06:04 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:48.605 21:06:04 -- target/host_management.sh@23 -- # cat 00:12:48.605 21:06:04 -- target/host_management.sh@30 -- # rpc_cmd 00:12:48.605 21:06:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.605 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:12:48.605 Malloc0 00:12:48.605 [2024-04-18 21:06:04.287688] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.605 21:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.605 21:06:04 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:48.605 21:06:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:48.605 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:12:48.605 21:06:04 -- target/host_management.sh@73 -- # perfpid=2988172 00:12:48.605 21:06:04 -- target/host_management.sh@74 -- # waitforlisten 2988172 /var/tmp/bdevperf.sock 00:12:48.605 21:06:04 -- common/autotest_common.sh@817 -- # '[' -z 2988172 ']' 00:12:48.605 21:06:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:48.605 21:06:04 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w 
verify -t 10 00:12:48.605 21:06:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:48.605 21:06:04 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:48.605 21:06:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:48.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:48.605 21:06:04 -- nvmf/common.sh@521 -- # config=() 00:12:48.605 21:06:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:48.605 21:06:04 -- nvmf/common.sh@521 -- # local subsystem config 00:12:48.605 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:12:48.605 21:06:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:48.605 21:06:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:48.605 { 00:12:48.605 "params": { 00:12:48.605 "name": "Nvme$subsystem", 00:12:48.605 "trtype": "$TEST_TRANSPORT", 00:12:48.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:48.605 "adrfam": "ipv4", 00:12:48.605 "trsvcid": "$NVMF_PORT", 00:12:48.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:48.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:48.605 "hdgst": ${hdgst:-false}, 00:12:48.605 "ddgst": ${ddgst:-false} 00:12:48.605 }, 00:12:48.605 "method": "bdev_nvme_attach_controller" 00:12:48.605 } 00:12:48.605 EOF 00:12:48.605 )") 00:12:48.605 21:06:04 -- nvmf/common.sh@543 -- # cat 00:12:48.605 21:06:04 -- nvmf/common.sh@545 -- # jq . 00:12:48.605 21:06:04 -- nvmf/common.sh@546 -- # IFS=, 00:12:48.605 21:06:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:48.605 "params": { 00:12:48.605 "name": "Nvme0", 00:12:48.605 "trtype": "tcp", 00:12:48.605 "traddr": "10.0.0.2", 00:12:48.605 "adrfam": "ipv4", 00:12:48.605 "trsvcid": "4420", 00:12:48.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:48.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:48.605 "hdgst": false, 00:12:48.605 "ddgst": false 00:12:48.605 }, 00:12:48.605 "method": "bdev_nvme_attach_controller" 00:12:48.605 }' 00:12:48.605 [2024-04-18 21:06:04.377634] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:12:48.606 [2024-04-18 21:06:04.377681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988172 ] 00:12:48.606 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.606 [2024-04-18 21:06:04.437540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.606 [2024-04-18 21:06:04.508008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.864 Running I/O for 10 seconds... 
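This first bdevperf instance was started with -r /var/tmp/bdevperf.sock, so besides the generated --json config its bdev layer can also be driven over that RPC socket (the harness itself uses it below only for framework_wait_init and bdev_get_iostat). For illustration, roughly the same attach that the rendered JSON describes could be issued by hand; a minimal sketch, assuming rpc.py's usual flag names for bdev_nvme_attach_controller on this SPDK revision and an unchanged socket path:

    # attach the listener at 10.0.0.2:4420 as local bdev "Nvme0" over TCP (sketch; check 'rpc.py bdev_nvme_attach_controller -h' for exact flags)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0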
00:12:49.434 21:06:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:49.434 21:06:05 -- common/autotest_common.sh@850 -- # return 0 00:12:49.434 21:06:05 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:49.434 21:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.434 21:06:05 -- common/autotest_common.sh@10 -- # set +x 00:12:49.434 21:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.434 21:06:05 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:49.434 21:06:05 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:49.434 21:06:05 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:49.434 21:06:05 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:49.434 21:06:05 -- target/host_management.sh@52 -- # local ret=1 00:12:49.434 21:06:05 -- target/host_management.sh@53 -- # local i 00:12:49.434 21:06:05 -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:49.434 21:06:05 -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:49.434 21:06:05 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:49.434 21:06:05 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:49.434 21:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.434 21:06:05 -- common/autotest_common.sh@10 -- # set +x 00:12:49.434 21:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.434 21:06:05 -- target/host_management.sh@55 -- # read_io_count=707 00:12:49.434 21:06:05 -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:12:49.434 21:06:05 -- target/host_management.sh@59 -- # ret=0 00:12:49.434 21:06:05 -- target/host_management.sh@60 -- # break 00:12:49.434 21:06:05 -- target/host_management.sh@64 -- # return 0 00:12:49.434 21:06:05 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:49.434 21:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.434 21:06:05 -- common/autotest_common.sh@10 -- # set +x 00:12:49.434 [2024-04-18 21:06:05.270966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94dcb0 is same with the state(5) to be set 00:12:49.434 [2024-04-18 21:06:05.271327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-04-18 21:06:05.271551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-04-18 21:06:05.271558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271876] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.271991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.271999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 
nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.435 [2024-04-18 21:06:05.272283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.435 [2024-04-18 21:06:05.272291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.436 [2024-04-18 21:06:05.272298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.436 [2024-04-18 21:06:05.272306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.436 [2024-04-18 21:06:05.272313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.436 [2024-04-18 21:06:05.272321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.436 [2024-04-18 21:06:05.272328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.436 [2024-04-18 21:06:05.272336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.436 [2024-04-18 21:06:05.272343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.436 [2024-04-18 21:06:05.272351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7c7b0 is same with the state(5) to be set 00:12:49.436 [2024-04-18 21:06:05.272400] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd7c7b0 was disconnected and freed. reset controller. 00:12:49.436 [2024-04-18 21:06:05.273328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:49.436 task offset: 103808 on job bdev=Nvme0n1 fails 00:12:49.436 00:12:49.436 Latency(us) 00:12:49.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.436 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:49.436 Job: Nvme0n1 ended in about 0.57 seconds with error 00:12:49.436 Verification LBA range: start 0x0 length 0x400 00:12:49.436 Nvme0n1 : 0.57 1351.39 84.46 112.62 0.00 42867.35 1460.31 44678.46 00:12:49.436 =================================================================================================================== 00:12:49.436 Total : 1351.39 84.46 112.62 0.00 42867.35 1460.31 44678.46 00:12:49.436 [2024-04-18 21:06:05.274959] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:49.436 [2024-04-18 21:06:05.274975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96b900 (9): Bad file descriptor 00:12:49.436 21:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.436 21:06:05 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:49.436 21:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.436 21:06:05 -- common/autotest_common.sh@10 -- # set +x 00:12:49.436 21:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.436 21:06:05 -- target/host_management.sh@87 -- # sleep 1 00:12:49.436 [2024-04-18 21:06:05.284237] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
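The abort storm above is induced deliberately: host_management.sh removes the host NQN from the subsystem's allowed hosts while bdevperf still has a queue depth of 64 in flight, which drops the active TCP queue pair and completes every queued command as ABORTED - SQ DELETION, and then re-adds the host so the controller reset can succeed (the "Resetting controller successful" line after the sleep). Stripped of the harness wrappers, the two RPCs boil down to the following sketch, assuming the target's default /var/tmp/spdk.sock RPC socket:

    # revoke the initiator's access; its live connection is torn down and in-flight I/O is aborted
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # restore access so the host-side reset/reconnect can complete
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0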
00:12:50.373 21:06:06 -- target/host_management.sh@91 -- # kill -9 2988172 00:12:50.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2988172) - No such process 00:12:50.373 21:06:06 -- target/host_management.sh@91 -- # true 00:12:50.373 21:06:06 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:50.373 21:06:06 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:50.373 21:06:06 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:50.373 21:06:06 -- nvmf/common.sh@521 -- # config=() 00:12:50.373 21:06:06 -- nvmf/common.sh@521 -- # local subsystem config 00:12:50.373 21:06:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:50.373 21:06:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:50.373 { 00:12:50.373 "params": { 00:12:50.373 "name": "Nvme$subsystem", 00:12:50.373 "trtype": "$TEST_TRANSPORT", 00:12:50.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.373 "adrfam": "ipv4", 00:12:50.373 "trsvcid": "$NVMF_PORT", 00:12:50.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.373 "hdgst": ${hdgst:-false}, 00:12:50.373 "ddgst": ${ddgst:-false} 00:12:50.373 }, 00:12:50.373 "method": "bdev_nvme_attach_controller" 00:12:50.373 } 00:12:50.373 EOF 00:12:50.373 )") 00:12:50.373 21:06:06 -- nvmf/common.sh@543 -- # cat 00:12:50.373 21:06:06 -- nvmf/common.sh@545 -- # jq . 00:12:50.373 21:06:06 -- nvmf/common.sh@546 -- # IFS=, 00:12:50.373 21:06:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:50.373 "params": { 00:12:50.373 "name": "Nvme0", 00:12:50.373 "trtype": "tcp", 00:12:50.373 "traddr": "10.0.0.2", 00:12:50.373 "adrfam": "ipv4", 00:12:50.373 "trsvcid": "4420", 00:12:50.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:50.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:50.373 "hdgst": false, 00:12:50.373 "ddgst": false 00:12:50.373 }, 00:12:50.373 "method": "bdev_nvme_attach_controller" 00:12:50.373 }' 00:12:50.633 [2024-04-18 21:06:06.336297] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:12:50.633 [2024-04-18 21:06:06.336343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988502 ] 00:12:50.633 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.633 [2024-04-18 21:06:06.395224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.633 [2024-04-18 21:06:06.466682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.892 Running I/O for 1 seconds... 
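For this retry pass the harness again renders the single-controller gen_nvmf_target_json config and hands it to bdevperf on an anonymous descriptor, hence --json /dev/fd/62 on the command line. Outside the harness the same run can be reproduced with an ordinary config file; a minimal sketch, assuming the standard SPDK JSON-config wrapper ("subsystems" / "bdev" / "config") around the rendered entry above and illustrative paths:

# write out the one-controller config that the generated JSON above corresponds to
cat > /tmp/nvme0_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload as this run: queue depth 64, 64 KiB I/Os, verify, 1 second
./build/examples/bdevperf --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 1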
00:12:51.832 00:12:51.832 Latency(us) 00:12:51.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.832 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:51.832 Verification LBA range: start 0x0 length 0x400 00:12:51.832 Nvme0n1 : 1.00 1404.17 87.76 0.00 0.00 44959.73 9232.03 47185.92 00:12:51.832 =================================================================================================================== 00:12:51.832 Total : 1404.17 87.76 0.00 0.00 44959.73 9232.03 47185.92 00:12:52.092 21:06:07 -- target/host_management.sh@102 -- # stoptarget 00:12:52.092 21:06:07 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:52.092 21:06:07 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:52.092 21:06:07 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:52.092 21:06:07 -- target/host_management.sh@40 -- # nvmftestfini 00:12:52.092 21:06:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:52.092 21:06:07 -- nvmf/common.sh@117 -- # sync 00:12:52.092 21:06:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.092 21:06:07 -- nvmf/common.sh@120 -- # set +e 00:12:52.092 21:06:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.092 21:06:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.092 rmmod nvme_tcp 00:12:52.092 rmmod nvme_fabrics 00:12:52.092 rmmod nvme_keyring 00:12:52.092 21:06:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.092 21:06:07 -- nvmf/common.sh@124 -- # set -e 00:12:52.092 21:06:07 -- nvmf/common.sh@125 -- # return 0 00:12:52.092 21:06:07 -- nvmf/common.sh@478 -- # '[' -n 2987912 ']' 00:12:52.092 21:06:07 -- nvmf/common.sh@479 -- # killprocess 2987912 00:12:52.092 21:06:07 -- common/autotest_common.sh@936 -- # '[' -z 2987912 ']' 00:12:52.092 21:06:07 -- common/autotest_common.sh@940 -- # kill -0 2987912 00:12:52.092 21:06:07 -- common/autotest_common.sh@941 -- # uname 00:12:52.092 21:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.092 21:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2987912 00:12:52.092 21:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:52.092 21:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:52.092 21:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2987912' 00:12:52.092 killing process with pid 2987912 00:12:52.093 21:06:07 -- common/autotest_common.sh@955 -- # kill 2987912 00:12:52.093 21:06:07 -- common/autotest_common.sh@960 -- # wait 2987912 00:12:52.352 [2024-04-18 21:06:08.167772] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:52.352 21:06:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:52.352 21:06:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:52.352 21:06:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:52.352 21:06:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.352 21:06:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.352 21:06:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.352 21:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.352 21:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.895 21:06:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
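As a quick cross-check of the two result tables, bdevperf's MiB/s column is simply IOPS multiplied by the 64 KiB I/O size, so the interrupted first run and the clean one-second verify pass are internally consistent:

    # 65536 bytes per I/O, 1048576 bytes per MiB
    echo 'scale=2; 1351.39*65536/1048576' | bc    # 84.46 MiB/s, first run cut to ~0.57 s by the forced reset
    echo 'scale=2; 1404.17*65536/1048576' | bc    # 87.76 MiB/s, clean 1 s run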
00:12:54.895 00:12:54.895 real 0m6.915s 00:12:54.895 user 0m20.967s 00:12:54.895 sys 0m1.127s 00:12:54.895 21:06:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:54.895 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:12:54.895 ************************************ 00:12:54.895 END TEST nvmf_host_management 00:12:54.895 ************************************ 00:12:54.895 21:06:10 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:54.895 00:12:54.895 real 0m13.180s 00:12:54.895 user 0m22.558s 00:12:54.895 sys 0m5.805s 00:12:54.895 21:06:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:54.895 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:12:54.895 ************************************ 00:12:54.895 END TEST nvmf_host_management 00:12:54.895 ************************************ 00:12:54.895 21:06:10 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:54.895 21:06:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:54.895 21:06:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.895 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:12:54.895 ************************************ 00:12:54.895 START TEST nvmf_lvol 00:12:54.895 ************************************ 00:12:54.895 21:06:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:54.895 * Looking for test storage... 00:12:54.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.895 21:06:10 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.895 21:06:10 -- nvmf/common.sh@7 -- # uname -s 00:12:54.895 21:06:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.895 21:06:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.895 21:06:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.895 21:06:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.895 21:06:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.895 21:06:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.895 21:06:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.895 21:06:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.895 21:06:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.895 21:06:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.895 21:06:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.895 21:06:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.895 21:06:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.895 21:06:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.895 21:06:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.895 21:06:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.895 21:06:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.895 21:06:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.895 21:06:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.896 21:06:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.896 21:06:10 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.896 21:06:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.896 21:06:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.896 21:06:10 -- paths/export.sh@5 -- # export PATH 00:12:54.896 21:06:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.896 21:06:10 -- nvmf/common.sh@47 -- # : 0 00:12:54.896 21:06:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.896 21:06:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.896 21:06:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.896 21:06:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.896 21:06:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.896 21:06:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.896 21:06:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.896 21:06:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.896 21:06:10 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.896 21:06:10 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.896 21:06:10 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:54.896 21:06:10 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:54.896 21:06:10 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.896 21:06:10 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:54.896 21:06:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:54.896 21:06:10 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.896 21:06:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:54.896 21:06:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:54.896 21:06:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:54.896 21:06:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.896 21:06:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.896 21:06:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.896 21:06:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:54.896 21:06:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:54.896 21:06:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.896 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:13:00.194 21:06:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:00.194 21:06:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.194 21:06:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.194 21:06:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.194 21:06:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.194 21:06:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.194 21:06:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.194 21:06:16 -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.194 21:06:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.194 21:06:16 -- nvmf/common.sh@296 -- # e810=() 00:13:00.194 21:06:16 -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.194 21:06:16 -- nvmf/common.sh@297 -- # x722=() 00:13:00.194 21:06:16 -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.194 21:06:16 -- nvmf/common.sh@298 -- # mlx=() 00:13:00.194 21:06:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.194 21:06:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.194 21:06:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.194 21:06:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.194 21:06:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.194 21:06:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.194 21:06:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:00.194 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:00.194 21:06:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.194 
21:06:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.194 21:06:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:00.194 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:00.194 21:06:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.194 21:06:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.194 21:06:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.194 21:06:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:00.194 21:06:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.194 21:06:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:00.194 Found net devices under 0000:86:00.0: cvl_0_0 00:13:00.194 21:06:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.194 21:06:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.194 21:06:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.194 21:06:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:00.194 21:06:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.194 21:06:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:00.194 Found net devices under 0000:86:00.1: cvl_0_1 00:13:00.194 21:06:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.194 21:06:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:00.194 21:06:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:00.194 21:06:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:00.194 21:06:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:00.194 21:06:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.194 21:06:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.194 21:06:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.194 21:06:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.194 21:06:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.194 21:06:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.194 21:06:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.194 21:06:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.194 21:06:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.194 21:06:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.195 21:06:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.195 21:06:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.195 21:06:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.454 21:06:16 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:13:00.454 21:06:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.454 21:06:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.454 21:06:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.454 21:06:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.454 21:06:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.454 21:06:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:13:00.454 00:13:00.454 --- 10.0.0.2 ping statistics --- 00:13:00.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.454 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:13:00.454 21:06:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:13:00.454 00:13:00.454 --- 10.0.0.1 ping statistics --- 00:13:00.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.454 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:13:00.454 21:06:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.454 21:06:16 -- nvmf/common.sh@411 -- # return 0 00:13:00.454 21:06:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:00.454 21:06:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.454 21:06:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:00.454 21:06:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:00.454 21:06:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.454 21:06:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:00.454 21:06:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:00.454 21:06:16 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:00.454 21:06:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:00.454 21:06:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:00.454 21:06:16 -- common/autotest_common.sh@10 -- # set +x 00:13:00.454 21:06:16 -- nvmf/common.sh@470 -- # nvmfpid=2993091 00:13:00.454 21:06:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:00.454 21:06:16 -- nvmf/common.sh@471 -- # waitforlisten 2993091 00:13:00.454 21:06:16 -- common/autotest_common.sh@817 -- # '[' -z 2993091 ']' 00:13:00.454 21:06:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.454 21:06:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:00.454 21:06:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.454 21:06:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:00.454 21:06:16 -- common/autotest_common.sh@10 -- # set +x 00:13:00.713 [2024-04-18 21:06:16.427208] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
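For readers skimming the trace, the nvmf_lvol run that begins here reduces to a short chain of rpc.py calls plus one perf run. The sketch below only condenses the commands that are visible in the trace itself; rpc.py stands for the full scripts/rpc.py path used in the log, and the <...> placeholders stand for UUIDs returned by the preceding RPCs, which differ on every run:

    # transport + backing bdevs (two Malloc bdevs striped into raid0)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                  # -> Malloc0
    rpc.py bdev_malloc_create 64 512                  # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    # lvstore + 20G lvol exported over NVMe/TCP
    rpc.py bdev_lvol_create_lvstore raid0 lvs         # -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20     # -> <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # I/O load while the snapshot/resize/clone operations below run
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT   # -> <snap-uuid>
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE         # -> <clone-uuid>
    rpc.py bdev_lvol_inflate <clone-uuid>
    # teardown
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete <lvol-uuid>
    rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>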
00:13:00.713 [2024-04-18 21:06:16.427253] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.713 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.713 [2024-04-18 21:06:16.489092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.713 [2024-04-18 21:06:16.559150] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.713 [2024-04-18 21:06:16.559188] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.713 [2024-04-18 21:06:16.559195] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.713 [2024-04-18 21:06:16.559200] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.713 [2024-04-18 21:06:16.559205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.713 [2024-04-18 21:06:16.559290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.713 [2024-04-18 21:06:16.559405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.713 [2024-04-18 21:06:16.559407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.351 21:06:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:01.351 21:06:17 -- common/autotest_common.sh@850 -- # return 0 00:13:01.351 21:06:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:01.351 21:06:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:01.351 21:06:17 -- common/autotest_common.sh@10 -- # set +x 00:13:01.351 21:06:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.351 21:06:17 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:01.610 [2024-04-18 21:06:17.409291] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.611 21:06:17 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:01.870 21:06:17 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:01.870 21:06:17 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:02.130 21:06:17 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:02.130 21:06:17 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:02.130 21:06:17 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:02.390 21:06:18 -- target/nvmf_lvol.sh@29 -- # lvs=df148f89-02c5-4fad-9b79-62bd4495a5bf 00:13:02.390 21:06:18 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df148f89-02c5-4fad-9b79-62bd4495a5bf lvol 20 00:13:02.649 21:06:18 -- target/nvmf_lvol.sh@32 -- # lvol=f038b3ae-46a5-4d90-b780-03c67a387b56 00:13:02.649 21:06:18 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:02.649 21:06:18 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f038b3ae-46a5-4d90-b780-03c67a387b56 00:13:02.908 21:06:18 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:03.168 [2024-04-18 21:06:18.886316] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.168 21:06:18 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.168 21:06:19 -- target/nvmf_lvol.sh@42 -- # perf_pid=2993592 00:13:03.168 21:06:19 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:03.168 21:06:19 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:03.428 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.369 21:06:20 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f038b3ae-46a5-4d90-b780-03c67a387b56 MY_SNAPSHOT 00:13:04.630 21:06:20 -- target/nvmf_lvol.sh@47 -- # snapshot=55e41be9-bd53-4fea-965e-c5135c10a8cf 00:13:04.630 21:06:20 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f038b3ae-46a5-4d90-b780-03c67a387b56 30 00:13:04.630 21:06:20 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 55e41be9-bd53-4fea-965e-c5135c10a8cf MY_CLONE 00:13:04.890 21:06:20 -- target/nvmf_lvol.sh@49 -- # clone=3ed3973a-a54c-4414-bca9-853fa1f1ed4f 00:13:04.890 21:06:20 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3ed3973a-a54c-4414-bca9-853fa1f1ed4f 00:13:05.462 21:06:21 -- target/nvmf_lvol.sh@53 -- # wait 2993592 00:13:15.469 Initializing NVMe Controllers 00:13:15.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:15.469 Controller IO queue size 128, less than required. 00:13:15.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:15.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:15.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:15.469 Initialization complete. Launching workers. 
00:13:15.469 ======================================================== 00:13:15.469 Latency(us) 00:13:15.469 Device Information : IOPS MiB/s Average min max 00:13:15.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11487.88 44.87 11146.47 1701.98 60565.09 00:13:15.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11362.98 44.39 11268.59 3178.97 58993.27 00:13:15.469 ======================================================== 00:13:15.469 Total : 22850.87 89.26 11207.20 1701.98 60565.09 00:13:15.469 00:13:15.469 21:06:29 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:15.469 21:06:29 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f038b3ae-46a5-4d90-b780-03c67a387b56 00:13:15.469 21:06:29 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df148f89-02c5-4fad-9b79-62bd4495a5bf 00:13:15.469 21:06:30 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:15.469 21:06:30 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:15.469 21:06:30 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:15.469 21:06:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:15.469 21:06:30 -- nvmf/common.sh@117 -- # sync 00:13:15.469 21:06:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.469 21:06:30 -- nvmf/common.sh@120 -- # set +e 00:13:15.469 21:06:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.469 21:06:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.469 rmmod nvme_tcp 00:13:15.469 rmmod nvme_fabrics 00:13:15.469 rmmod nvme_keyring 00:13:15.469 21:06:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.469 21:06:30 -- nvmf/common.sh@124 -- # set -e 00:13:15.469 21:06:30 -- nvmf/common.sh@125 -- # return 0 00:13:15.469 21:06:30 -- nvmf/common.sh@478 -- # '[' -n 2993091 ']' 00:13:15.469 21:06:30 -- nvmf/common.sh@479 -- # killprocess 2993091 00:13:15.469 21:06:30 -- common/autotest_common.sh@936 -- # '[' -z 2993091 ']' 00:13:15.469 21:06:30 -- common/autotest_common.sh@940 -- # kill -0 2993091 00:13:15.469 21:06:30 -- common/autotest_common.sh@941 -- # uname 00:13:15.469 21:06:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.469 21:06:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2993091 00:13:15.469 21:06:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.469 21:06:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.469 21:06:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2993091' 00:13:15.469 killing process with pid 2993091 00:13:15.469 21:06:30 -- common/autotest_common.sh@955 -- # kill 2993091 00:13:15.469 21:06:30 -- common/autotest_common.sh@960 -- # wait 2993091 00:13:15.469 21:06:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:15.469 21:06:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:15.469 21:06:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:15.469 21:06:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.469 21:06:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.469 21:06:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.469 21:06:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.469 21:06:30 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:16.850 21:06:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.850 00:13:16.850 real 0m22.127s 00:13:16.850 user 1m4.525s 00:13:16.850 sys 0m7.032s 00:13:16.850 21:06:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.850 21:06:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.850 ************************************ 00:13:16.850 END TEST nvmf_lvol 00:13:16.850 ************************************ 00:13:16.850 21:06:32 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:16.850 21:06:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:16.850 21:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.850 21:06:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.850 ************************************ 00:13:16.850 START TEST nvmf_lvs_grow 00:13:16.850 ************************************ 00:13:16.850 21:06:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:17.110 * Looking for test storage... 00:13:17.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.110 21:06:32 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.110 21:06:32 -- nvmf/common.sh@7 -- # uname -s 00:13:17.110 21:06:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.110 21:06:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.110 21:06:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.110 21:06:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.110 21:06:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.110 21:06:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.110 21:06:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.110 21:06:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.110 21:06:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.110 21:06:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.110 21:06:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:17.110 21:06:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:17.110 21:06:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.110 21:06:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.110 21:06:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.110 21:06:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.110 21:06:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.110 21:06:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.110 21:06:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.110 21:06:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.110 21:06:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.110 21:06:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.110 21:06:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.110 21:06:32 -- paths/export.sh@5 -- # export PATH 00:13:17.110 21:06:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.110 21:06:32 -- nvmf/common.sh@47 -- # : 0 00:13:17.110 21:06:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.110 21:06:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.110 21:06:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.110 21:06:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.110 21:06:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.110 21:06:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.110 21:06:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.110 21:06:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.110 21:06:32 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.110 21:06:32 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:17.110 21:06:32 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:17.110 21:06:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:17.110 21:06:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.110 21:06:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:17.111 21:06:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:17.111 21:06:32 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:13:17.111 21:06:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.111 21:06:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.111 21:06:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.111 21:06:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:17.111 21:06:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:17.111 21:06:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.111 21:06:32 -- common/autotest_common.sh@10 -- # set +x 00:13:23.685 21:06:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:23.685 21:06:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.685 21:06:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.685 21:06:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.685 21:06:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.685 21:06:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.685 21:06:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.686 21:06:38 -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.686 21:06:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.686 21:06:38 -- nvmf/common.sh@296 -- # e810=() 00:13:23.686 21:06:38 -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.686 21:06:38 -- nvmf/common.sh@297 -- # x722=() 00:13:23.686 21:06:38 -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.686 21:06:38 -- nvmf/common.sh@298 -- # mlx=() 00:13:23.686 21:06:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.686 21:06:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.686 21:06:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.686 21:06:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:23.686 21:06:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.686 21:06:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.686 21:06:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:23.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:23.686 21:06:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.686 
21:06:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.686 21:06:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:23.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:23.686 21:06:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.686 21:06:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.686 21:06:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.686 21:06:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:23.686 21:06:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.686 21:06:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:23.686 Found net devices under 0000:86:00.0: cvl_0_0 00:13:23.686 21:06:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.686 21:06:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.686 21:06:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.686 21:06:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:23.686 21:06:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.686 21:06:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:23.686 Found net devices under 0000:86:00.1: cvl_0_1 00:13:23.686 21:06:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.686 21:06:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:23.686 21:06:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:23.686 21:06:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:23.686 21:06:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.686 21:06:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.686 21:06:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.686 21:06:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:23.686 21:06:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.686 21:06:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.686 21:06:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:23.686 21:06:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.686 21:06:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.686 21:06:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:23.686 21:06:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:23.686 21:06:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.686 21:06:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.686 21:06:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.686 21:06:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.686 21:06:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:23.686 
21:06:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.686 21:06:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.686 21:06:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.686 21:06:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:23.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:13:23.686 00:13:23.686 --- 10.0.0.2 ping statistics --- 00:13:23.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.686 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:13:23.686 21:06:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:13:23.686 00:13:23.686 --- 10.0.0.1 ping statistics --- 00:13:23.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.686 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:13:23.686 21:06:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.686 21:06:38 -- nvmf/common.sh@411 -- # return 0 00:13:23.686 21:06:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:23.686 21:06:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.686 21:06:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:23.686 21:06:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.686 21:06:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:23.686 21:06:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:23.686 21:06:38 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:23.686 21:06:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:23.686 21:06:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:23.686 21:06:38 -- common/autotest_common.sh@10 -- # set +x 00:13:23.686 21:06:38 -- nvmf/common.sh@470 -- # nvmfpid=2999245 00:13:23.686 21:06:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:23.686 21:06:38 -- nvmf/common.sh@471 -- # waitforlisten 2999245 00:13:23.686 21:06:38 -- common/autotest_common.sh@817 -- # '[' -z 2999245 ']' 00:13:23.686 21:06:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.686 21:06:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:23.686 21:06:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.686 21:06:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:23.686 21:06:38 -- common/autotest_common.sh@10 -- # set +x 00:13:23.686 [2024-04-18 21:06:38.977048] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
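Stripped of the xtrace noise, the nvmf_tcp_init sequence traced just above (and earlier, for the nvmf_lvol run) wires the two detected cvl_0_* ports back-to-back through a network namespace, so the initiator-side port in the root namespace reaches the target-side port over real E810 hardware. A condensed reading, using only the interface names and addresses that appear in this log, is roughly:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root ns

After both pings succeed, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1), which is the "Starting SPDK" banner that follows.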
00:13:23.686 [2024-04-18 21:06:38.977092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.686 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.686 [2024-04-18 21:06:39.041396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.686 [2024-04-18 21:06:39.118719] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.686 [2024-04-18 21:06:39.118753] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.687 [2024-04-18 21:06:39.118760] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.687 [2024-04-18 21:06:39.118766] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.687 [2024-04-18 21:06:39.118772] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.687 [2024-04-18 21:06:39.118793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.947 21:06:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:23.947 21:06:39 -- common/autotest_common.sh@850 -- # return 0 00:13:23.947 21:06:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:23.947 21:06:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:23.947 21:06:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.947 21:06:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.947 21:06:39 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:24.207 [2024-04-18 21:06:39.966547] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.207 21:06:39 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:24.207 21:06:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:24.207 21:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.207 21:06:39 -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 ************************************ 00:13:24.207 START TEST lvs_grow_clean 00:13:24.207 ************************************ 00:13:24.207 21:06:40 -- common/autotest_common.sh@1111 -- # lvs_grow 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:24.207 21:06:40 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:24.466 21:06:40 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:24.466 21:06:40 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:24.725 21:06:40 -- target/nvmf_lvs_grow.sh@28 -- # lvs=3b03c92f-b52e-4169-a777-69b829f542ae 00:13:24.725 21:06:40 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:24.725 21:06:40 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:24.985 21:06:40 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:24.985 21:06:40 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:24.985 21:06:40 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b03c92f-b52e-4169-a777-69b829f542ae lvol 150 00:13:24.985 21:06:40 -- target/nvmf_lvs_grow.sh@33 -- # lvol=25f053b4-f7be-45e5-8f93-2e4ee23fce3c 00:13:24.985 21:06:40 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:24.985 21:06:40 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:25.244 [2024-04-18 21:06:41.011164] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:25.244 [2024-04-18 21:06:41.011216] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:25.244 true 00:13:25.244 21:06:41 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:25.244 21:06:41 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:25.504 21:06:41 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:25.504 21:06:41 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:25.504 21:06:41 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 25f053b4-f7be-45e5-8f93-2e4ee23fce3c 00:13:25.763 21:06:41 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:25.763 [2024-04-18 21:06:41.685187] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.024 21:06:41 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.024 21:06:41 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:26.024 21:06:41 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2999760 00:13:26.024 21:06:41 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.024 21:06:41 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2999760 
/var/tmp/bdevperf.sock 00:13:26.024 21:06:41 -- common/autotest_common.sh@817 -- # '[' -z 2999760 ']' 00:13:26.024 21:06:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.024 21:06:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:26.024 21:06:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.024 21:06:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:26.024 21:06:41 -- common/autotest_common.sh@10 -- # set +x 00:13:26.024 [2024-04-18 21:06:41.896284] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:13:26.024 [2024-04-18 21:06:41.896329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999760 ] 00:13:26.024 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.284 [2024-04-18 21:06:41.955340] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.284 [2024-04-18 21:06:42.024932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.853 21:06:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:26.853 21:06:42 -- common/autotest_common.sh@850 -- # return 0 00:13:26.853 21:06:42 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:27.112 Nvme0n1 00:13:27.113 21:06:42 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:27.372 [ 00:13:27.372 { 00:13:27.372 "name": "Nvme0n1", 00:13:27.372 "aliases": [ 00:13:27.372 "25f053b4-f7be-45e5-8f93-2e4ee23fce3c" 00:13:27.372 ], 00:13:27.372 "product_name": "NVMe disk", 00:13:27.372 "block_size": 4096, 00:13:27.372 "num_blocks": 38912, 00:13:27.372 "uuid": "25f053b4-f7be-45e5-8f93-2e4ee23fce3c", 00:13:27.372 "assigned_rate_limits": { 00:13:27.372 "rw_ios_per_sec": 0, 00:13:27.372 "rw_mbytes_per_sec": 0, 00:13:27.372 "r_mbytes_per_sec": 0, 00:13:27.372 "w_mbytes_per_sec": 0 00:13:27.372 }, 00:13:27.372 "claimed": false, 00:13:27.372 "zoned": false, 00:13:27.372 "supported_io_types": { 00:13:27.372 "read": true, 00:13:27.372 "write": true, 00:13:27.372 "unmap": true, 00:13:27.372 "write_zeroes": true, 00:13:27.372 "flush": true, 00:13:27.372 "reset": true, 00:13:27.372 "compare": true, 00:13:27.372 "compare_and_write": true, 00:13:27.372 "abort": true, 00:13:27.372 "nvme_admin": true, 00:13:27.372 "nvme_io": true 00:13:27.372 }, 00:13:27.373 "memory_domains": [ 00:13:27.373 { 00:13:27.373 "dma_device_id": "system", 00:13:27.373 "dma_device_type": 1 00:13:27.373 } 00:13:27.373 ], 00:13:27.373 "driver_specific": { 00:13:27.373 "nvme": [ 00:13:27.373 { 00:13:27.373 "trid": { 00:13:27.373 "trtype": "TCP", 00:13:27.373 "adrfam": "IPv4", 00:13:27.373 "traddr": "10.0.0.2", 00:13:27.373 "trsvcid": "4420", 00:13:27.373 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:27.373 }, 00:13:27.373 "ctrlr_data": { 00:13:27.373 "cntlid": 1, 00:13:27.373 "vendor_id": "0x8086", 00:13:27.373 "model_number": "SPDK bdev Controller", 00:13:27.373 "serial_number": "SPDK0", 
00:13:27.373 "firmware_revision": "24.05", 00:13:27.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:27.373 "oacs": { 00:13:27.373 "security": 0, 00:13:27.373 "format": 0, 00:13:27.373 "firmware": 0, 00:13:27.373 "ns_manage": 0 00:13:27.373 }, 00:13:27.373 "multi_ctrlr": true, 00:13:27.373 "ana_reporting": false 00:13:27.373 }, 00:13:27.373 "vs": { 00:13:27.373 "nvme_version": "1.3" 00:13:27.373 }, 00:13:27.373 "ns_data": { 00:13:27.373 "id": 1, 00:13:27.373 "can_share": true 00:13:27.373 } 00:13:27.373 } 00:13:27.373 ], 00:13:27.373 "mp_policy": "active_passive" 00:13:27.373 } 00:13:27.373 } 00:13:27.373 ] 00:13:27.373 21:06:43 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2999993 00:13:27.373 21:06:43 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:27.373 21:06:43 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:27.373 Running I/O for 10 seconds... 00:13:28.310 Latency(us) 00:13:28.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:28.310 Nvme0n1 : 1.00 22078.00 86.24 0.00 0.00 0.00 0.00 0.00 00:13:28.310 =================================================================================================================== 00:13:28.310 Total : 22078.00 86.24 0.00 0.00 0.00 0.00 0.00 00:13:28.310 00:13:29.268 21:06:45 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:29.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.532 Nvme0n1 : 2.00 22305.00 87.13 0.00 0.00 0.00 0.00 0.00 00:13:29.532 =================================================================================================================== 00:13:29.532 Total : 22305.00 87.13 0.00 0.00 0.00 0.00 0.00 00:13:29.532 00:13:29.532 true 00:13:29.532 21:06:45 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:29.532 21:06:45 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:29.792 21:06:45 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:29.792 21:06:45 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:29.792 21:06:45 -- target/nvmf_lvs_grow.sh@65 -- # wait 2999993 00:13:30.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.361 Nvme0n1 : 3.00 22336.33 87.25 0.00 0.00 0.00 0.00 0.00 00:13:30.361 =================================================================================================================== 00:13:30.361 Total : 22336.33 87.25 0.00 0.00 0.00 0.00 0.00 00:13:30.361 00:13:31.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:31.740 Nvme0n1 : 4.00 22432.00 87.62 0.00 0.00 0.00 0.00 0.00 00:13:31.740 =================================================================================================================== 00:13:31.740 Total : 22432.00 87.62 0.00 0.00 0.00 0.00 0.00 00:13:31.740 00:13:32.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.309 Nvme0n1 : 5.00 22489.60 87.85 0.00 0.00 0.00 0.00 0.00 00:13:32.309 =================================================================================================================== 00:13:32.309 Total : 
22489.60 87.85 0.00 0.00 0.00 0.00 0.00 00:13:32.309 00:13:33.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.687 Nvme0n1 : 6.00 22538.67 88.04 0.00 0.00 0.00 0.00 0.00 00:13:33.687 =================================================================================================================== 00:13:33.687 Total : 22538.67 88.04 0.00 0.00 0.00 0.00 0.00 00:13:33.687 00:13:34.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.625 Nvme0n1 : 7.00 22530.71 88.01 0.00 0.00 0.00 0.00 0.00 00:13:34.625 =================================================================================================================== 00:13:34.625 Total : 22530.71 88.01 0.00 0.00 0.00 0.00 0.00 00:13:34.625 00:13:35.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.563 Nvme0n1 : 8.00 22534.62 88.03 0.00 0.00 0.00 0.00 0.00 00:13:35.563 =================================================================================================================== 00:13:35.563 Total : 22534.62 88.03 0.00 0.00 0.00 0.00 0.00 00:13:35.563 00:13:36.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.501 Nvme0n1 : 9.00 22563.56 88.14 0.00 0.00 0.00 0.00 0.00 00:13:36.501 =================================================================================================================== 00:13:36.501 Total : 22563.56 88.14 0.00 0.00 0.00 0.00 0.00 00:13:36.501 00:13:37.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.438 Nvme0n1 : 10.00 22574.50 88.18 0.00 0.00 0.00 0.00 0.00 00:13:37.438 =================================================================================================================== 00:13:37.438 Total : 22574.50 88.18 0.00 0.00 0.00 0.00 0.00 00:13:37.438 00:13:37.438 00:13:37.438 Latency(us) 00:13:37.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.438 Nvme0n1 : 10.00 22576.73 88.19 0.00 0.00 5666.16 3405.02 18464.06 00:13:37.438 =================================================================================================================== 00:13:37.438 Total : 22576.73 88.19 0.00 0.00 5666.16 3405.02 18464.06 00:13:37.438 0 00:13:37.438 21:06:53 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2999760 00:13:37.438 21:06:53 -- common/autotest_common.sh@936 -- # '[' -z 2999760 ']' 00:13:37.438 21:06:53 -- common/autotest_common.sh@940 -- # kill -0 2999760 00:13:37.438 21:06:53 -- common/autotest_common.sh@941 -- # uname 00:13:37.438 21:06:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:37.438 21:06:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2999760 00:13:37.438 21:06:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:37.438 21:06:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:37.438 21:06:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2999760' 00:13:37.438 killing process with pid 2999760 00:13:37.438 21:06:53 -- common/autotest_common.sh@955 -- # kill 2999760 00:13:37.438 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.438 00:13:37.438 Latency(us) 00:13:37.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.438 =================================================================================================================== 
00:13:37.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:37.438 21:06:53 -- common/autotest_common.sh@960 -- # wait 2999760 00:13:37.697 21:06:53 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:37.956 21:06:53 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:37.956 21:06:53 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:38.216 21:06:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:38.216 21:06:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:13:38.216 21:06:53 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:38.216 [2024-04-18 21:06:54.063029] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:38.216 21:06:54 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:38.216 21:06:54 -- common/autotest_common.sh@638 -- # local es=0 00:13:38.216 21:06:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:38.216 21:06:54 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.216 21:06:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.216 21:06:54 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.216 21:06:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.216 21:06:54 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.216 21:06:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.216 21:06:54 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.216 21:06:54 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:38.216 21:06:54 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:38.475 request: 00:13:38.475 { 00:13:38.475 "uuid": "3b03c92f-b52e-4169-a777-69b829f542ae", 00:13:38.475 "method": "bdev_lvol_get_lvstores", 00:13:38.475 "req_id": 1 00:13:38.475 } 00:13:38.475 Got JSON-RPC error response 00:13:38.475 response: 00:13:38.475 { 00:13:38.475 "code": -19, 00:13:38.475 "message": "No such device" 00:13:38.475 } 00:13:38.475 21:06:54 -- common/autotest_common.sh@641 -- # es=1 00:13:38.475 21:06:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:38.475 21:06:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:38.475 21:06:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:38.475 21:06:54 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:38.734 aio_bdev 00:13:38.734 21:06:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
25f053b4-f7be-45e5-8f93-2e4ee23fce3c 00:13:38.734 21:06:54 -- common/autotest_common.sh@885 -- # local bdev_name=25f053b4-f7be-45e5-8f93-2e4ee23fce3c 00:13:38.734 21:06:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:38.734 21:06:54 -- common/autotest_common.sh@887 -- # local i 00:13:38.734 21:06:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:38.734 21:06:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:38.734 21:06:54 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:38.734 21:06:54 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 25f053b4-f7be-45e5-8f93-2e4ee23fce3c -t 2000 00:13:38.994 [ 00:13:38.994 { 00:13:38.994 "name": "25f053b4-f7be-45e5-8f93-2e4ee23fce3c", 00:13:38.994 "aliases": [ 00:13:38.994 "lvs/lvol" 00:13:38.994 ], 00:13:38.994 "product_name": "Logical Volume", 00:13:38.994 "block_size": 4096, 00:13:38.994 "num_blocks": 38912, 00:13:38.994 "uuid": "25f053b4-f7be-45e5-8f93-2e4ee23fce3c", 00:13:38.994 "assigned_rate_limits": { 00:13:38.994 "rw_ios_per_sec": 0, 00:13:38.994 "rw_mbytes_per_sec": 0, 00:13:38.994 "r_mbytes_per_sec": 0, 00:13:38.994 "w_mbytes_per_sec": 0 00:13:38.994 }, 00:13:38.994 "claimed": false, 00:13:38.994 "zoned": false, 00:13:38.994 "supported_io_types": { 00:13:38.994 "read": true, 00:13:38.994 "write": true, 00:13:38.994 "unmap": true, 00:13:38.994 "write_zeroes": true, 00:13:38.994 "flush": false, 00:13:38.994 "reset": true, 00:13:38.994 "compare": false, 00:13:38.994 "compare_and_write": false, 00:13:38.994 "abort": false, 00:13:38.994 "nvme_admin": false, 00:13:38.994 "nvme_io": false 00:13:38.994 }, 00:13:38.994 "driver_specific": { 00:13:38.994 "lvol": { 00:13:38.994 "lvol_store_uuid": "3b03c92f-b52e-4169-a777-69b829f542ae", 00:13:38.994 "base_bdev": "aio_bdev", 00:13:38.994 "thin_provision": false, 00:13:38.994 "snapshot": false, 00:13:38.994 "clone": false, 00:13:38.994 "esnap_clone": false 00:13:38.994 } 00:13:38.994 } 00:13:38.994 } 00:13:38.994 ] 00:13:38.994 21:06:54 -- common/autotest_common.sh@893 -- # return 0 00:13:38.994 21:06:54 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:38.994 21:06:54 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:13:39.254 21:06:54 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:13:39.254 21:06:54 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:39.254 21:06:54 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:13:39.254 21:06:55 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:13:39.254 21:06:55 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 25f053b4-f7be-45e5-8f93-2e4ee23fce3c 00:13:39.513 21:06:55 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b03c92f-b52e-4169-a777-69b829f542ae 00:13:39.773 21:06:55 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:39.773 21:06:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
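Condensed from the lvs_grow_clean trace above, the grow path amounts to: build an lvstore on an AIO bdev backed by a 200M file, carve a 150M lvol out of it and export it (the subsystem/bdevperf steps follow the same pattern as the nvmf_lvol run and are omitted here), then enlarge the backing file and ask SPDK to pick up the new size. The sketch shortens the backing-file path to aio_bdev and uses <lvs-uuid> for the lvstore UUID reported by the run; the jq checks mirror the 49 -> 99 total-data-cluster and 61 free-cluster assertions in the trace:

    truncate -s 200M aio_bdev
    rpc.py bdev_aio_create aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
    # grow: resize the backing file, rescan the AIO bdev, then grow the lvstore onto the new space
    truncate -s 400M aio_bdev
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # now 99
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'         # 61 after the I/O run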
00:13:39.773 00:13:39.773 real 0m15.601s 00:13:39.773 user 0m15.297s 00:13:39.773 sys 0m1.404s 00:13:39.773 21:06:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.773 21:06:55 -- common/autotest_common.sh@10 -- # set +x 00:13:39.773 ************************************ 00:13:39.773 END TEST lvs_grow_clean 00:13:39.773 ************************************ 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:40.033 21:06:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:40.033 21:06:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:40.033 21:06:55 -- common/autotest_common.sh@10 -- # set +x 00:13:40.033 ************************************ 00:13:40.033 START TEST lvs_grow_dirty 00:13:40.033 ************************************ 00:13:40.033 21:06:55 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:40.033 21:06:55 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:40.293 21:06:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:40.293 21:06:56 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:40.553 21:06:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=5bf71d99-feae-4546-9569-4bf27feab804 00:13:40.553 21:06:56 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:40.553 21:06:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:40.553 21:06:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:40.553 21:06:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:40.553 21:06:56 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5bf71d99-feae-4546-9569-4bf27feab804 lvol 150 00:13:40.812 21:06:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=530afac6-6112-40d7-bbec-3cfcf5485db8 00:13:40.812 21:06:56 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:40.812 21:06:56 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:40.812 [2024-04-18 21:06:56.739236] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:13:40.812 [2024-04-18 21:06:56.739283] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:40.812 true 00:13:41.072 21:06:56 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:41.073 21:06:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:41.073 21:06:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:41.073 21:06:56 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:41.332 21:06:57 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 530afac6-6112-40d7-bbec-3cfcf5485db8 00:13:41.592 21:06:57 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:41.592 21:06:57 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:41.851 21:06:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3002365 00:13:41.851 21:06:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:41.851 21:06:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3002365 /var/tmp/bdevperf.sock 00:13:41.851 21:06:57 -- common/autotest_common.sh@817 -- # '[' -z 3002365 ']' 00:13:41.851 21:06:57 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:41.852 21:06:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:41.852 21:06:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.852 21:06:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:41.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:41.852 21:06:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.852 21:06:57 -- common/autotest_common.sh@10 -- # set +x 00:13:41.852 [2024-04-18 21:06:57.657925] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
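
The size arithmetic behind the rescan notice above: the AIO backing file is created with truncate -s 200M and later grown with truncate -s 400M, and the bdev uses a 4096-byte block size, so the block count moves from 200 MiB / 4096 B = 51200 to 400 MiB / 4096 B = 102400. The lvstore itself is not grown until bdev_lvol_grow_lvstore runs later in the test (while bdevperf is writing); with the 4 MiB cluster size passed at create time (--cluster-sz 4194304), total_data_clusters then goes from 49 to 99. A minimal sketch of that grow path, with rpc.py paths abbreviated:

  truncate -s 400M <aio-file>          # 400 MiB / 4096 B = 102400 blocks
  rpc.py bdev_aio_rescan aio_bdev      # AIO bdev re-reads the file size
  rpc.py bdev_lvol_grow_lvstore -u 5bf71d99-feae-4546-9569-4bf27feab804
  rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 | jq -r '.[0].total_data_clusters'   # 49 -> 99
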
00:13:41.852 [2024-04-18 21:06:57.657974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3002365 ] 00:13:41.852 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.852 [2024-04-18 21:06:57.716466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.111 [2024-04-18 21:06:57.793547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.681 21:06:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:42.681 21:06:58 -- common/autotest_common.sh@850 -- # return 0 00:13:42.681 21:06:58 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:42.941 Nvme0n1 00:13:42.941 21:06:58 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:43.200 [ 00:13:43.200 { 00:13:43.200 "name": "Nvme0n1", 00:13:43.200 "aliases": [ 00:13:43.200 "530afac6-6112-40d7-bbec-3cfcf5485db8" 00:13:43.200 ], 00:13:43.200 "product_name": "NVMe disk", 00:13:43.200 "block_size": 4096, 00:13:43.200 "num_blocks": 38912, 00:13:43.200 "uuid": "530afac6-6112-40d7-bbec-3cfcf5485db8", 00:13:43.200 "assigned_rate_limits": { 00:13:43.200 "rw_ios_per_sec": 0, 00:13:43.200 "rw_mbytes_per_sec": 0, 00:13:43.200 "r_mbytes_per_sec": 0, 00:13:43.200 "w_mbytes_per_sec": 0 00:13:43.200 }, 00:13:43.200 "claimed": false, 00:13:43.200 "zoned": false, 00:13:43.200 "supported_io_types": { 00:13:43.200 "read": true, 00:13:43.200 "write": true, 00:13:43.200 "unmap": true, 00:13:43.200 "write_zeroes": true, 00:13:43.200 "flush": true, 00:13:43.200 "reset": true, 00:13:43.200 "compare": true, 00:13:43.200 "compare_and_write": true, 00:13:43.200 "abort": true, 00:13:43.200 "nvme_admin": true, 00:13:43.200 "nvme_io": true 00:13:43.200 }, 00:13:43.200 "memory_domains": [ 00:13:43.200 { 00:13:43.200 "dma_device_id": "system", 00:13:43.200 "dma_device_type": 1 00:13:43.200 } 00:13:43.200 ], 00:13:43.200 "driver_specific": { 00:13:43.200 "nvme": [ 00:13:43.200 { 00:13:43.200 "trid": { 00:13:43.200 "trtype": "TCP", 00:13:43.200 "adrfam": "IPv4", 00:13:43.200 "traddr": "10.0.0.2", 00:13:43.200 "trsvcid": "4420", 00:13:43.200 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:43.200 }, 00:13:43.200 "ctrlr_data": { 00:13:43.200 "cntlid": 1, 00:13:43.200 "vendor_id": "0x8086", 00:13:43.200 "model_number": "SPDK bdev Controller", 00:13:43.200 "serial_number": "SPDK0", 00:13:43.200 "firmware_revision": "24.05", 00:13:43.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:43.200 "oacs": { 00:13:43.201 "security": 0, 00:13:43.201 "format": 0, 00:13:43.201 "firmware": 0, 00:13:43.201 "ns_manage": 0 00:13:43.201 }, 00:13:43.201 "multi_ctrlr": true, 00:13:43.201 "ana_reporting": false 00:13:43.201 }, 00:13:43.201 "vs": { 00:13:43.201 "nvme_version": "1.3" 00:13:43.201 }, 00:13:43.201 "ns_data": { 00:13:43.201 "id": 1, 00:13:43.201 "can_share": true 00:13:43.201 } 00:13:43.201 } 00:13:43.201 ], 00:13:43.201 "mp_policy": "active_passive" 00:13:43.201 } 00:13:43.201 } 00:13:43.201 ] 00:13:43.201 21:06:59 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3002599 00:13:43.201 21:06:59 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:43.201 21:06:59 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:43.201 Running I/O for 10 seconds... 00:13:44.580 Latency(us) 00:13:44.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.580 Nvme0n1 : 1.00 21384.00 83.53 0.00 0.00 0.00 0.00 0.00 00:13:44.580 =================================================================================================================== 00:13:44.580 Total : 21384.00 83.53 0.00 0.00 0.00 0.00 0.00 00:13:44.580 00:13:45.148 21:07:01 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:45.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.407 Nvme0n1 : 2.00 21464.00 83.84 0.00 0.00 0.00 0.00 0.00 00:13:45.407 =================================================================================================================== 00:13:45.407 Total : 21464.00 83.84 0.00 0.00 0.00 0.00 0.00 00:13:45.407 00:13:45.407 true 00:13:45.407 21:07:01 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:45.407 21:07:01 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:45.666 21:07:01 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:45.666 21:07:01 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:45.666 21:07:01 -- target/nvmf_lvs_grow.sh@65 -- # wait 3002599 00:13:46.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:46.235 Nvme0n1 : 3.00 21597.33 84.36 0.00 0.00 0.00 0.00 0.00 00:13:46.235 =================================================================================================================== 00:13:46.235 Total : 21597.33 84.36 0.00 0.00 0.00 0.00 0.00 00:13:46.235 00:13:47.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.631 Nvme0n1 : 4.00 21676.00 84.67 0.00 0.00 0.00 0.00 0.00 00:13:47.631 =================================================================================================================== 00:13:47.631 Total : 21676.00 84.67 0.00 0.00 0.00 0.00 0.00 00:13:47.631 00:13:48.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.227 Nvme0n1 : 5.00 21742.40 84.93 0.00 0.00 0.00 0.00 0.00 00:13:48.227 =================================================================================================================== 00:13:48.227 Total : 21742.40 84.93 0.00 0.00 0.00 0.00 0.00 00:13:48.227 00:13:49.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.611 Nvme0n1 : 6.00 21798.67 85.15 0.00 0.00 0.00 0.00 0.00 00:13:49.611 =================================================================================================================== 00:13:49.611 Total : 21798.67 85.15 0.00 0.00 0.00 0.00 0.00 00:13:49.611 00:13:50.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.547 Nvme0n1 : 7.00 21844.57 85.33 0.00 0.00 0.00 0.00 0.00 00:13:50.547 =================================================================================================================== 00:13:50.547 Total : 21844.57 85.33 0.00 0.00 0.00 0.00 0.00 00:13:50.547 00:13:51.485 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:13:51.485 Nvme0n1 : 8.00 21878.00 85.46 0.00 0.00 0.00 0.00 0.00 00:13:51.485 =================================================================================================================== 00:13:51.485 Total : 21878.00 85.46 0.00 0.00 0.00 0.00 0.00 00:13:51.485 00:13:52.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.424 Nvme0n1 : 9.00 21905.78 85.57 0.00 0.00 0.00 0.00 0.00 00:13:52.424 =================================================================================================================== 00:13:52.424 Total : 21905.78 85.57 0.00 0.00 0.00 0.00 0.00 00:13:52.424 00:13:53.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.362 Nvme0n1 : 10.00 21931.20 85.67 0.00 0.00 0.00 0.00 0.00 00:13:53.362 =================================================================================================================== 00:13:53.362 Total : 21931.20 85.67 0.00 0.00 0.00 0.00 0.00 00:13:53.362 00:13:53.362 00:13:53.362 Latency(us) 00:13:53.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.363 Nvme0n1 : 10.01 21931.80 85.67 0.00 0.00 5832.22 1837.86 11625.52 00:13:53.363 =================================================================================================================== 00:13:53.363 Total : 21931.80 85.67 0.00 0.00 5832.22 1837.86 11625.52 00:13:53.363 0 00:13:53.363 21:07:09 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3002365 00:13:53.363 21:07:09 -- common/autotest_common.sh@936 -- # '[' -z 3002365 ']' 00:13:53.363 21:07:09 -- common/autotest_common.sh@940 -- # kill -0 3002365 00:13:53.363 21:07:09 -- common/autotest_common.sh@941 -- # uname 00:13:53.363 21:07:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:53.363 21:07:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3002365 00:13:53.363 21:07:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:53.363 21:07:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:53.363 21:07:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3002365' 00:13:53.363 killing process with pid 3002365 00:13:53.363 21:07:09 -- common/autotest_common.sh@955 -- # kill 3002365 00:13:53.363 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.363 00:13:53.363 Latency(us) 00:13:53.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.363 =================================================================================================================== 00:13:53.363 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.363 21:07:09 -- common/autotest_common.sh@960 -- # wait 3002365 00:13:53.622 21:07:09 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:53.883 21:07:09 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:53.883 21:07:09 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:53.883 21:07:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:53.883 21:07:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:13:53.883 21:07:09 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2999245 00:13:53.883 
21:07:09 -- target/nvmf_lvs_grow.sh@74 -- # wait 2999245 00:13:53.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2999245 Killed "${NVMF_APP[@]}" "$@" 00:13:53.883 21:07:09 -- target/nvmf_lvs_grow.sh@74 -- # true 00:13:53.883 21:07:09 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:13:53.883 21:07:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:53.883 21:07:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:53.883 21:07:09 -- common/autotest_common.sh@10 -- # set +x 00:13:53.883 21:07:09 -- nvmf/common.sh@470 -- # nvmfpid=3004438 00:13:53.883 21:07:09 -- nvmf/common.sh@471 -- # waitforlisten 3004438 00:13:53.883 21:07:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:53.883 21:07:09 -- common/autotest_common.sh@817 -- # '[' -z 3004438 ']' 00:13:53.883 21:07:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.883 21:07:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:53.883 21:07:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.883 21:07:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:53.883 21:07:09 -- common/autotest_common.sh@10 -- # set +x 00:13:54.143 [2024-04-18 21:07:09.851509] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:13:54.143 [2024-04-18 21:07:09.851561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.143 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.143 [2024-04-18 21:07:09.915735] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.143 [2024-04-18 21:07:09.993252] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.143 [2024-04-18 21:07:09.993286] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.143 [2024-04-18 21:07:09.993293] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.143 [2024-04-18 21:07:09.993299] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.143 [2024-04-18 21:07:09.993305] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
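
What distinguishes the dirty variant is visible just below: the original nvmf_tgt (pid 2999245) was killed with SIGKILL while the lvstore still had unwritten metadata, a fresh target is started in its place, and re-creating the AIO bdev then forces blobstore recovery (the "Performing recovery on blobstore" / "Recover: blob 0x0 / 0x1" notices). A rough sketch of that restart-and-recover sequence, with paths abbreviated and the new target assumed to be backgrounded the way nvmfappstart does it:

  kill -9 2999245                                          # leave the lvstore dirty
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  rpc.py bdev_aio_create <aio-file> aio_bdev 4096          # triggers blobstore recovery
  rpc.py bdev_wait_for_examine
  rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804   # 61 free / 99 total again
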
00:13:54.143 [2024-04-18 21:07:09.993320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.082 21:07:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:55.082 21:07:10 -- common/autotest_common.sh@850 -- # return 0 00:13:55.082 21:07:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:55.082 21:07:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:55.082 21:07:10 -- common/autotest_common.sh@10 -- # set +x 00:13:55.082 21:07:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.082 21:07:10 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:55.082 [2024-04-18 21:07:10.836339] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:55.082 [2024-04-18 21:07:10.836427] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:55.082 [2024-04-18 21:07:10.836453] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:55.082 21:07:10 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:13:55.082 21:07:10 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 530afac6-6112-40d7-bbec-3cfcf5485db8 00:13:55.082 21:07:10 -- common/autotest_common.sh@885 -- # local bdev_name=530afac6-6112-40d7-bbec-3cfcf5485db8 00:13:55.082 21:07:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:55.082 21:07:10 -- common/autotest_common.sh@887 -- # local i 00:13:55.082 21:07:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:55.082 21:07:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:55.082 21:07:10 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:55.342 21:07:11 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 530afac6-6112-40d7-bbec-3cfcf5485db8 -t 2000 00:13:55.342 [ 00:13:55.342 { 00:13:55.342 "name": "530afac6-6112-40d7-bbec-3cfcf5485db8", 00:13:55.342 "aliases": [ 00:13:55.342 "lvs/lvol" 00:13:55.342 ], 00:13:55.342 "product_name": "Logical Volume", 00:13:55.342 "block_size": 4096, 00:13:55.342 "num_blocks": 38912, 00:13:55.342 "uuid": "530afac6-6112-40d7-bbec-3cfcf5485db8", 00:13:55.342 "assigned_rate_limits": { 00:13:55.342 "rw_ios_per_sec": 0, 00:13:55.342 "rw_mbytes_per_sec": 0, 00:13:55.342 "r_mbytes_per_sec": 0, 00:13:55.342 "w_mbytes_per_sec": 0 00:13:55.342 }, 00:13:55.342 "claimed": false, 00:13:55.342 "zoned": false, 00:13:55.342 "supported_io_types": { 00:13:55.342 "read": true, 00:13:55.342 "write": true, 00:13:55.342 "unmap": true, 00:13:55.342 "write_zeroes": true, 00:13:55.342 "flush": false, 00:13:55.342 "reset": true, 00:13:55.342 "compare": false, 00:13:55.342 "compare_and_write": false, 00:13:55.342 "abort": false, 00:13:55.342 "nvme_admin": false, 00:13:55.342 "nvme_io": false 00:13:55.342 }, 00:13:55.342 "driver_specific": { 00:13:55.342 "lvol": { 00:13:55.342 "lvol_store_uuid": "5bf71d99-feae-4546-9569-4bf27feab804", 00:13:55.342 "base_bdev": "aio_bdev", 00:13:55.342 "thin_provision": false, 00:13:55.342 "snapshot": false, 00:13:55.342 "clone": false, 00:13:55.342 "esnap_clone": false 00:13:55.342 } 00:13:55.342 } 00:13:55.342 } 00:13:55.342 ] 00:13:55.342 21:07:11 -- common/autotest_common.sh@893 -- # return 0 00:13:55.342 21:07:11 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:55.342 21:07:11 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:13:55.603 21:07:11 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:13:55.603 21:07:11 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:55.603 21:07:11 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:13:55.863 21:07:11 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:13:55.863 21:07:11 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:55.863 [2024-04-18 21:07:11.688627] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:55.863 21:07:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:55.863 21:07:11 -- common/autotest_common.sh@638 -- # local es=0 00:13:55.863 21:07:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:55.863 21:07:11 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.863 21:07:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.863 21:07:11 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.863 21:07:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.863 21:07:11 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.863 21:07:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.863 21:07:11 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.863 21:07:11 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:55.863 21:07:11 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:56.123 request: 00:13:56.123 { 00:13:56.123 "uuid": "5bf71d99-feae-4546-9569-4bf27feab804", 00:13:56.123 "method": "bdev_lvol_get_lvstores", 00:13:56.123 "req_id": 1 00:13:56.123 } 00:13:56.123 Got JSON-RPC error response 00:13:56.123 response: 00:13:56.123 { 00:13:56.123 "code": -19, 00:13:56.123 "message": "No such device" 00:13:56.123 } 00:13:56.123 21:07:11 -- common/autotest_common.sh@641 -- # es=1 00:13:56.123 21:07:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:56.123 21:07:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:56.123 21:07:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:56.123 21:07:11 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:56.383 aio_bdev 00:13:56.384 21:07:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 530afac6-6112-40d7-bbec-3cfcf5485db8 00:13:56.384 21:07:12 -- 
common/autotest_common.sh@885 -- # local bdev_name=530afac6-6112-40d7-bbec-3cfcf5485db8 00:13:56.384 21:07:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:56.384 21:07:12 -- common/autotest_common.sh@887 -- # local i 00:13:56.384 21:07:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:56.384 21:07:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:56.384 21:07:12 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:56.384 21:07:12 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 530afac6-6112-40d7-bbec-3cfcf5485db8 -t 2000 00:13:56.644 [ 00:13:56.644 { 00:13:56.644 "name": "530afac6-6112-40d7-bbec-3cfcf5485db8", 00:13:56.644 "aliases": [ 00:13:56.644 "lvs/lvol" 00:13:56.644 ], 00:13:56.644 "product_name": "Logical Volume", 00:13:56.644 "block_size": 4096, 00:13:56.644 "num_blocks": 38912, 00:13:56.644 "uuid": "530afac6-6112-40d7-bbec-3cfcf5485db8", 00:13:56.644 "assigned_rate_limits": { 00:13:56.644 "rw_ios_per_sec": 0, 00:13:56.644 "rw_mbytes_per_sec": 0, 00:13:56.644 "r_mbytes_per_sec": 0, 00:13:56.644 "w_mbytes_per_sec": 0 00:13:56.644 }, 00:13:56.644 "claimed": false, 00:13:56.644 "zoned": false, 00:13:56.644 "supported_io_types": { 00:13:56.644 "read": true, 00:13:56.644 "write": true, 00:13:56.644 "unmap": true, 00:13:56.644 "write_zeroes": true, 00:13:56.644 "flush": false, 00:13:56.644 "reset": true, 00:13:56.644 "compare": false, 00:13:56.644 "compare_and_write": false, 00:13:56.644 "abort": false, 00:13:56.644 "nvme_admin": false, 00:13:56.644 "nvme_io": false 00:13:56.644 }, 00:13:56.644 "driver_specific": { 00:13:56.644 "lvol": { 00:13:56.644 "lvol_store_uuid": "5bf71d99-feae-4546-9569-4bf27feab804", 00:13:56.644 "base_bdev": "aio_bdev", 00:13:56.644 "thin_provision": false, 00:13:56.644 "snapshot": false, 00:13:56.644 "clone": false, 00:13:56.644 "esnap_clone": false 00:13:56.645 } 00:13:56.645 } 00:13:56.645 } 00:13:56.645 ] 00:13:56.645 21:07:12 -- common/autotest_common.sh@893 -- # return 0 00:13:56.645 21:07:12 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:56.645 21:07:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:13:56.905 21:07:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:13:56.905 21:07:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:13:56.905 21:07:12 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:56.905 21:07:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:13:56.905 21:07:12 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 530afac6-6112-40d7-bbec-3cfcf5485db8 00:13:57.165 21:07:12 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5bf71d99-feae-4546-9569-4bf27feab804 00:13:57.436 21:07:13 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:57.437 21:07:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:57.437 00:13:57.437 real 0m17.459s 00:13:57.437 user 
0m44.548s 00:13:57.437 sys 0m4.023s 00:13:57.437 21:07:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.437 21:07:13 -- common/autotest_common.sh@10 -- # set +x 00:13:57.437 ************************************ 00:13:57.437 END TEST lvs_grow_dirty 00:13:57.437 ************************************ 00:13:57.437 21:07:13 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:57.437 21:07:13 -- common/autotest_common.sh@794 -- # type=--id 00:13:57.437 21:07:13 -- common/autotest_common.sh@795 -- # id=0 00:13:57.437 21:07:13 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:13:57.437 21:07:13 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:57.699 21:07:13 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:13:57.699 21:07:13 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:13:57.699 21:07:13 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:13:57.699 21:07:13 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:57.699 nvmf_trace.0 00:13:57.699 21:07:13 -- common/autotest_common.sh@809 -- # return 0 00:13:57.699 21:07:13 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:57.699 21:07:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:57.699 21:07:13 -- nvmf/common.sh@117 -- # sync 00:13:57.699 21:07:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.699 21:07:13 -- nvmf/common.sh@120 -- # set +e 00:13:57.699 21:07:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.699 21:07:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.699 rmmod nvme_tcp 00:13:57.699 rmmod nvme_fabrics 00:13:57.699 rmmod nvme_keyring 00:13:57.699 21:07:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.699 21:07:13 -- nvmf/common.sh@124 -- # set -e 00:13:57.699 21:07:13 -- nvmf/common.sh@125 -- # return 0 00:13:57.699 21:07:13 -- nvmf/common.sh@478 -- # '[' -n 3004438 ']' 00:13:57.699 21:07:13 -- nvmf/common.sh@479 -- # killprocess 3004438 00:13:57.699 21:07:13 -- common/autotest_common.sh@936 -- # '[' -z 3004438 ']' 00:13:57.699 21:07:13 -- common/autotest_common.sh@940 -- # kill -0 3004438 00:13:57.699 21:07:13 -- common/autotest_common.sh@941 -- # uname 00:13:57.699 21:07:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:57.699 21:07:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3004438 00:13:57.699 21:07:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:57.699 21:07:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:57.699 21:07:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3004438' 00:13:57.699 killing process with pid 3004438 00:13:57.699 21:07:13 -- common/autotest_common.sh@955 -- # kill 3004438 00:13:57.699 21:07:13 -- common/autotest_common.sh@960 -- # wait 3004438 00:13:57.959 21:07:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:57.959 21:07:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:57.959 21:07:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:57.959 21:07:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.959 21:07:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.959 21:07:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.959 21:07:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.959 21:07:13 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:59.868 21:07:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.868 00:13:59.868 real 0m43.029s 00:13:59.868 user 1m5.869s 00:13:59.868 sys 0m10.514s 00:13:59.868 21:07:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:59.868 21:07:15 -- common/autotest_common.sh@10 -- # set +x 00:13:59.868 ************************************ 00:13:59.868 END TEST nvmf_lvs_grow 00:13:59.868 ************************************ 00:14:00.128 21:07:15 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:00.128 21:07:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:00.128 21:07:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.128 21:07:15 -- common/autotest_common.sh@10 -- # set +x 00:14:00.128 ************************************ 00:14:00.128 START TEST nvmf_bdev_io_wait 00:14:00.128 ************************************ 00:14:00.128 21:07:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:00.128 * Looking for test storage... 00:14:00.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.128 21:07:16 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.128 21:07:16 -- nvmf/common.sh@7 -- # uname -s 00:14:00.128 21:07:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.128 21:07:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.128 21:07:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.128 21:07:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.128 21:07:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.128 21:07:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.128 21:07:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.128 21:07:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.128 21:07:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.128 21:07:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.388 21:07:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.388 21:07:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.388 21:07:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.388 21:07:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.388 21:07:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.388 21:07:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.388 21:07:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.388 21:07:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.388 21:07:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.388 21:07:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.388 21:07:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.388 21:07:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.388 21:07:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.388 21:07:16 -- paths/export.sh@5 -- # export PATH 00:14:00.388 21:07:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.388 21:07:16 -- nvmf/common.sh@47 -- # : 0 00:14:00.388 21:07:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.388 21:07:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.388 21:07:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.388 21:07:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.388 21:07:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.388 21:07:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.388 21:07:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.388 21:07:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.388 21:07:16 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.388 21:07:16 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.388 21:07:16 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:00.388 21:07:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:00.388 21:07:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.388 21:07:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:00.389 21:07:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:00.389 21:07:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:00.389 21:07:16 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.389 21:07:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.389 21:07:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.389 21:07:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:00.389 21:07:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:00.389 21:07:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.389 21:07:16 -- common/autotest_common.sh@10 -- # set +x 00:14:06.961 21:07:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:06.961 21:07:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:06.961 21:07:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:06.961 21:07:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:06.961 21:07:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:06.961 21:07:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:06.961 21:07:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:06.961 21:07:22 -- nvmf/common.sh@295 -- # net_devs=() 00:14:06.961 21:07:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:06.961 21:07:22 -- nvmf/common.sh@296 -- # e810=() 00:14:06.961 21:07:22 -- nvmf/common.sh@296 -- # local -ga e810 00:14:06.961 21:07:22 -- nvmf/common.sh@297 -- # x722=() 00:14:06.961 21:07:22 -- nvmf/common.sh@297 -- # local -ga x722 00:14:06.961 21:07:22 -- nvmf/common.sh@298 -- # mlx=() 00:14:06.961 21:07:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:06.961 21:07:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.961 21:07:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:06.961 21:07:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:06.961 21:07:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:06.961 21:07:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.961 21:07:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:06.961 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:06.961 21:07:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:14:06.961 21:07:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:06.961 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:06.961 21:07:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.961 21:07:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:06.962 21:07:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.962 21:07:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.962 21:07:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:06.962 21:07:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.962 21:07:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:06.962 Found net devices under 0000:86:00.0: cvl_0_0 00:14:06.962 21:07:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.962 21:07:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.962 21:07:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.962 21:07:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:06.962 21:07:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.962 21:07:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:06.962 Found net devices under 0000:86:00.1: cvl_0_1 00:14:06.962 21:07:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.962 21:07:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:06.962 21:07:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:06.962 21:07:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:06.962 21:07:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.962 21:07:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.962 21:07:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.962 21:07:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:06.962 21:07:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.962 21:07:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.962 21:07:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:06.962 21:07:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.962 21:07:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.962 21:07:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:06.962 21:07:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:06.962 21:07:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.962 21:07:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.962 21:07:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.962 21:07:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.962 21:07:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:06.962 21:07:22 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.962 21:07:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.962 21:07:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.962 21:07:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:06.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:14:06.962 00:14:06.962 --- 10.0.0.2 ping statistics --- 00:14:06.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.962 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:14:06.962 21:07:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:14:06.962 00:14:06.962 --- 10.0.0.1 ping statistics --- 00:14:06.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.962 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:06.962 21:07:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.962 21:07:22 -- nvmf/common.sh@411 -- # return 0 00:14:06.962 21:07:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:06.962 21:07:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.962 21:07:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:06.962 21:07:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.962 21:07:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:06.962 21:07:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:06.962 21:07:22 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:06.962 21:07:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:06.962 21:07:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:06.962 21:07:22 -- common/autotest_common.sh@10 -- # set +x 00:14:06.962 21:07:22 -- nvmf/common.sh@470 -- # nvmfpid=3009004 00:14:06.962 21:07:22 -- nvmf/common.sh@471 -- # waitforlisten 3009004 00:14:06.962 21:07:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:06.962 21:07:22 -- common/autotest_common.sh@817 -- # '[' -z 3009004 ']' 00:14:06.962 21:07:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.962 21:07:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:06.962 21:07:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.962 21:07:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:06.962 21:07:22 -- common/autotest_common.sh@10 -- # set +x 00:14:06.962 [2024-04-18 21:07:22.452932] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
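
The NIC setup logged above is the standard phy TCP test-bed layout: one port of the E810 pair (cvl_0_0) is moved into its own network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, with TCP port 4420 opened and both directions verified with ping. Condensed, the sequence is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
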
00:14:06.962 [2024-04-18 21:07:22.452971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.962 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.962 [2024-04-18 21:07:22.515203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.962 [2024-04-18 21:07:22.591777] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.962 [2024-04-18 21:07:22.591817] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.962 [2024-04-18 21:07:22.591824] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.962 [2024-04-18 21:07:22.591830] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.962 [2024-04-18 21:07:22.591835] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.962 [2024-04-18 21:07:22.591879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.962 [2024-04-18 21:07:22.591976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.962 [2024-04-18 21:07:22.592061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.962 [2024-04-18 21:07:22.592063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.531 21:07:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:07.531 21:07:23 -- common/autotest_common.sh@850 -- # return 0 00:14:07.531 21:07:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:07.531 21:07:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:07.531 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 21:07:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:07.531 21:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.531 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 21:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:07.531 21:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.531 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 21:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.531 21:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.531 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 [2024-04-18 21:07:23.366380] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.531 21:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:07.531 21:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.531 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 Malloc0 00:14:07.531 21:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:07.531 21:07:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.531 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 21:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:07.531 21:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.531 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 21:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.531 21:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.531 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 [2024-04-18 21:07:23.424347] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.531 21:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3009250 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@30 -- # READ_PID=3009252 00:14:07.531 21:07:23 -- nvmf/common.sh@521 -- # config=() 00:14:07.531 21:07:23 -- nvmf/common.sh@521 -- # local subsystem config 00:14:07.531 21:07:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:07.531 21:07:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:07.531 { 00:14:07.531 "params": { 00:14:07.531 "name": "Nvme$subsystem", 00:14:07.531 "trtype": "$TEST_TRANSPORT", 00:14:07.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.531 "adrfam": "ipv4", 00:14:07.531 "trsvcid": "$NVMF_PORT", 00:14:07.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.531 "hdgst": ${hdgst:-false}, 00:14:07.531 "ddgst": ${ddgst:-false} 00:14:07.531 }, 00:14:07.531 "method": "bdev_nvme_attach_controller" 00:14:07.531 } 00:14:07.531 EOF 00:14:07.531 )") 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3009254 00:14:07.531 21:07:23 -- nvmf/common.sh@521 -- # config=() 00:14:07.531 21:07:23 -- nvmf/common.sh@521 -- # local subsystem config 00:14:07.531 21:07:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:07.531 21:07:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:07.531 { 00:14:07.531 "params": { 00:14:07.531 "name": "Nvme$subsystem", 00:14:07.531 "trtype": "$TEST_TRANSPORT", 00:14:07.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.531 "adrfam": "ipv4", 00:14:07.531 "trsvcid": "$NVMF_PORT", 00:14:07.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.531 "hdgst": ${hdgst:-false}, 00:14:07.531 "ddgst": ${ddgst:-false} 00:14:07.531 }, 00:14:07.531 "method": "bdev_nvme_attach_controller" 00:14:07.531 } 00:14:07.531 EOF 00:14:07.531 )") 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3009257 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:07.531 21:07:23 -- nvmf/common.sh@543 -- # cat 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@35 -- # sync 00:14:07.531 21:07:23 -- nvmf/common.sh@521 -- # config=() 00:14:07.531 21:07:23 -- nvmf/common.sh@521 -- # local subsystem config 00:14:07.531 21:07:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:07.531 21:07:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:07.531 { 00:14:07.531 "params": { 00:14:07.531 "name": "Nvme$subsystem", 00:14:07.531 "trtype": "$TEST_TRANSPORT", 00:14:07.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.531 "adrfam": "ipv4", 00:14:07.531 "trsvcid": "$NVMF_PORT", 00:14:07.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.531 "hdgst": ${hdgst:-false}, 00:14:07.531 "ddgst": ${ddgst:-false} 00:14:07.531 }, 00:14:07.531 "method": "bdev_nvme_attach_controller" 00:14:07.531 } 00:14:07.531 EOF 00:14:07.531 )") 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:07.531 21:07:23 -- nvmf/common.sh@521 -- # config=() 00:14:07.531 21:07:23 -- nvmf/common.sh@543 -- # cat 00:14:07.531 21:07:23 -- nvmf/common.sh@521 -- # local subsystem config 00:14:07.531 21:07:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:07.531 21:07:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:07.531 { 00:14:07.531 "params": { 00:14:07.531 "name": "Nvme$subsystem", 00:14:07.531 "trtype": "$TEST_TRANSPORT", 00:14:07.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.531 "adrfam": "ipv4", 00:14:07.531 "trsvcid": "$NVMF_PORT", 00:14:07.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.531 "hdgst": ${hdgst:-false}, 00:14:07.531 "ddgst": ${ddgst:-false} 00:14:07.531 }, 00:14:07.531 "method": "bdev_nvme_attach_controller" 00:14:07.531 } 00:14:07.531 EOF 00:14:07.531 )") 00:14:07.531 21:07:23 -- nvmf/common.sh@545 -- # jq . 00:14:07.531 21:07:23 -- nvmf/common.sh@543 -- # cat 00:14:07.531 21:07:23 -- target/bdev_io_wait.sh@37 -- # wait 3009250 00:14:07.531 21:07:23 -- nvmf/common.sh@543 -- # cat 00:14:07.531 21:07:23 -- nvmf/common.sh@546 -- # IFS=, 00:14:07.531 21:07:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:07.531 "params": { 00:14:07.531 "name": "Nvme1", 00:14:07.531 "trtype": "tcp", 00:14:07.531 "traddr": "10.0.0.2", 00:14:07.531 "adrfam": "ipv4", 00:14:07.531 "trsvcid": "4420", 00:14:07.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.531 "hdgst": false, 00:14:07.531 "ddgst": false 00:14:07.531 }, 00:14:07.531 "method": "bdev_nvme_attach_controller" 00:14:07.531 }' 00:14:07.531 21:07:23 -- nvmf/common.sh@545 -- # jq . 00:14:07.531 21:07:23 -- nvmf/common.sh@545 -- # jq . 00:14:07.531 21:07:23 -- nvmf/common.sh@545 -- # jq . 
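For reference, the target-side configuration that the bdev_io_wait trace above drives through rpc_cmd can be reproduced by hand with the same RPCs. This is only a condensed sketch of what the log already shows, with rpc.py standing for spdk/scripts/rpc.py aimed at the running nvmf_tgt:

  # TCP transport plus one malloc-backed subsystem, exactly as traced above
  rpc.py bdev_set_options -p 5 -c 1
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # one bdevperf instance per workload (write/read/flush/unmap), each fed the generated attach JSON on /dev/fd/63
  spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256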
00:14:07.531 21:07:23 -- nvmf/common.sh@546 -- # IFS=, 00:14:07.531 21:07:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:07.531 "params": { 00:14:07.531 "name": "Nvme1", 00:14:07.531 "trtype": "tcp", 00:14:07.531 "traddr": "10.0.0.2", 00:14:07.531 "adrfam": "ipv4", 00:14:07.531 "trsvcid": "4420", 00:14:07.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.531 "hdgst": false, 00:14:07.531 "ddgst": false 00:14:07.531 }, 00:14:07.531 "method": "bdev_nvme_attach_controller" 00:14:07.531 }' 00:14:07.531 21:07:23 -- nvmf/common.sh@546 -- # IFS=, 00:14:07.531 21:07:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:07.531 "params": { 00:14:07.531 "name": "Nvme1", 00:14:07.531 "trtype": "tcp", 00:14:07.531 "traddr": "10.0.0.2", 00:14:07.531 "adrfam": "ipv4", 00:14:07.531 "trsvcid": "4420", 00:14:07.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.531 "hdgst": false, 00:14:07.531 "ddgst": false 00:14:07.531 }, 00:14:07.531 "method": "bdev_nvme_attach_controller" 00:14:07.531 }' 00:14:07.531 21:07:23 -- nvmf/common.sh@546 -- # IFS=, 00:14:07.531 21:07:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:07.531 "params": { 00:14:07.531 "name": "Nvme1", 00:14:07.531 "trtype": "tcp", 00:14:07.531 "traddr": "10.0.0.2", 00:14:07.531 "adrfam": "ipv4", 00:14:07.531 "trsvcid": "4420", 00:14:07.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.531 "hdgst": false, 00:14:07.531 "ddgst": false 00:14:07.531 }, 00:14:07.531 "method": "bdev_nvme_attach_controller" 00:14:07.531 }' 00:14:07.790 [2024-04-18 21:07:23.472674] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:14:07.790 [2024-04-18 21:07:23.472726] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:07.790 [2024-04-18 21:07:23.475557] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:14:07.790 [2024-04-18 21:07:23.475604] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:07.790 [2024-04-18 21:07:23.476393] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:14:07.790 [2024-04-18 21:07:23.476435] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:07.790 [2024-04-18 21:07:23.482358] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:14:07.790 [2024-04-18 21:07:23.482458] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:07.790 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.790 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.790 [2024-04-18 21:07:23.656236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.790 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.050 [2024-04-18 21:07:23.729328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:08.050 [2024-04-18 21:07:23.749546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.050 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.050 [2024-04-18 21:07:23.825440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:08.050 [2024-04-18 21:07:23.849122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.050 [2024-04-18 21:07:23.890694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.050 [2024-04-18 21:07:23.941125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:08.050 [2024-04-18 21:07:23.966241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:08.309 Running I/O for 1 seconds... 00:14:08.309 Running I/O for 1 seconds... 00:14:08.309 Running I/O for 1 seconds... 00:14:08.309 Running I/O for 1 seconds... 00:14:09.267 00:14:09.267 Latency(us) 00:14:09.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.267 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:09.267 Nvme1n1 : 1.01 12711.00 49.65 0.00 0.00 10033.63 3846.68 17894.18 00:14:09.267 =================================================================================================================== 00:14:09.267 Total : 12711.00 49.65 0.00 0.00 10033.63 3846.68 17894.18 00:14:09.267 00:14:09.267 Latency(us) 00:14:09.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.267 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:09.267 Nvme1n1 : 1.01 9086.48 35.49 0.00 0.00 14036.79 5071.92 26784.28 00:14:09.267 =================================================================================================================== 00:14:09.267 Total : 9086.48 35.49 0.00 0.00 14036.79 5071.92 26784.28 00:14:09.267 00:14:09.267 Latency(us) 00:14:09.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.267 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:09.267 Nvme1n1 : 1.00 9656.35 37.72 0.00 0.00 13217.55 4188.61 27696.08 00:14:09.267 =================================================================================================================== 00:14:09.267 Total : 9656.35 37.72 0.00 0.00 13217.55 4188.61 27696.08 00:14:09.536 00:14:09.536 Latency(us) 00:14:09.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.537 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:09.537 Nvme1n1 : 1.00 252049.24 984.57 0.00 0.00 506.08 202.13 669.61 00:14:09.537 =================================================================================================================== 00:14:09.537 Total : 252049.24 984.57 0.00 0.00 506.08 202.13 669.61 00:14:09.844 21:07:25 -- target/bdev_io_wait.sh@38 -- # wait 3009252 00:14:09.844 
21:07:25 -- target/bdev_io_wait.sh@39 -- # wait 3009254 00:14:09.844 21:07:25 -- target/bdev_io_wait.sh@40 -- # wait 3009257 00:14:09.844 21:07:25 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.844 21:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.844 21:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:09.844 21:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.844 21:07:25 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:09.844 21:07:25 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:09.844 21:07:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:09.844 21:07:25 -- nvmf/common.sh@117 -- # sync 00:14:09.844 21:07:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.844 21:07:25 -- nvmf/common.sh@120 -- # set +e 00:14:09.844 21:07:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.844 21:07:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.844 rmmod nvme_tcp 00:14:09.844 rmmod nvme_fabrics 00:14:09.844 rmmod nvme_keyring 00:14:09.844 21:07:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.844 21:07:25 -- nvmf/common.sh@124 -- # set -e 00:14:09.844 21:07:25 -- nvmf/common.sh@125 -- # return 0 00:14:09.844 21:07:25 -- nvmf/common.sh@478 -- # '[' -n 3009004 ']' 00:14:09.844 21:07:25 -- nvmf/common.sh@479 -- # killprocess 3009004 00:14:09.844 21:07:25 -- common/autotest_common.sh@936 -- # '[' -z 3009004 ']' 00:14:09.844 21:07:25 -- common/autotest_common.sh@940 -- # kill -0 3009004 00:14:09.844 21:07:25 -- common/autotest_common.sh@941 -- # uname 00:14:09.844 21:07:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.844 21:07:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3009004 00:14:09.844 21:07:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:09.844 21:07:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:09.844 21:07:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3009004' 00:14:09.844 killing process with pid 3009004 00:14:09.844 21:07:25 -- common/autotest_common.sh@955 -- # kill 3009004 00:14:09.844 21:07:25 -- common/autotest_common.sh@960 -- # wait 3009004 00:14:10.150 21:07:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:10.150 21:07:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:10.150 21:07:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:10.150 21:07:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.150 21:07:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.150 21:07:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.150 21:07:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.150 21:07:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.059 21:07:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:12.059 00:14:12.059 real 0m11.909s 00:14:12.059 user 0m20.227s 00:14:12.059 sys 0m6.370s 00:14:12.059 21:07:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:12.059 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:14:12.059 ************************************ 00:14:12.059 END TEST nvmf_bdev_io_wait 00:14:12.059 ************************************ 00:14:12.059 21:07:27 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:12.059 21:07:27 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:14:12.059 21:07:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.059 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:14:12.319 ************************************ 00:14:12.319 START TEST nvmf_queue_depth 00:14:12.319 ************************************ 00:14:12.319 21:07:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:12.319 * Looking for test storage... 00:14:12.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.319 21:07:28 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.319 21:07:28 -- nvmf/common.sh@7 -- # uname -s 00:14:12.319 21:07:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.319 21:07:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.319 21:07:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.319 21:07:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.319 21:07:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.319 21:07:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.319 21:07:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.319 21:07:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.319 21:07:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.319 21:07:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.319 21:07:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:12.319 21:07:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:12.319 21:07:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.319 21:07:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.319 21:07:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.319 21:07:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.319 21:07:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.319 21:07:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.319 21:07:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.319 21:07:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.319 21:07:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.319 21:07:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.319 21:07:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.319 21:07:28 -- paths/export.sh@5 -- # export PATH 00:14:12.319 21:07:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.319 21:07:28 -- nvmf/common.sh@47 -- # : 0 00:14:12.319 21:07:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.319 21:07:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.319 21:07:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.319 21:07:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.319 21:07:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.319 21:07:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.319 21:07:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.319 21:07:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.319 21:07:28 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:12.319 21:07:28 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:12.319 21:07:28 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:12.319 21:07:28 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:12.319 21:07:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:12.319 21:07:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.319 21:07:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:12.319 21:07:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:12.319 21:07:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:12.319 21:07:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.319 21:07:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.319 21:07:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.319 21:07:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:12.319 21:07:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:12.319 21:07:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.319 21:07:28 -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.892 21:07:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:18.892 21:07:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:18.892 21:07:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:18.892 21:07:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:18.892 21:07:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:18.892 21:07:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:18.892 21:07:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:18.892 21:07:33 -- nvmf/common.sh@295 -- # net_devs=() 00:14:18.892 21:07:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:18.892 21:07:33 -- nvmf/common.sh@296 -- # e810=() 00:14:18.892 21:07:33 -- nvmf/common.sh@296 -- # local -ga e810 00:14:18.892 21:07:33 -- nvmf/common.sh@297 -- # x722=() 00:14:18.892 21:07:33 -- nvmf/common.sh@297 -- # local -ga x722 00:14:18.892 21:07:33 -- nvmf/common.sh@298 -- # mlx=() 00:14:18.892 21:07:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:18.892 21:07:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.892 21:07:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:18.892 21:07:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:18.892 21:07:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:18.892 21:07:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.892 21:07:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:18.892 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:18.892 21:07:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.892 21:07:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:18.892 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:18.892 21:07:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
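The nvmf_tcp_init plumbing traced around this point is the same on every tcp/phy run: the first e810 port (cvl_0_0) is moved into a private network namespace to serve as the target interface, while the second (cvl_0_1) stays in the root namespace for the initiator. A condensed sketch of the commands visible in this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # reachability checked in both directions below

nvmf_tgt is then launched prefixed with ip netns exec cvl_0_0_ns_spdk, so its 4420 listener lives on 10.0.0.2 inside the namespace.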
00:14:18.892 21:07:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:18.892 21:07:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:18.892 21:07:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.893 21:07:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.893 21:07:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:18.893 21:07:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.893 21:07:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:18.893 Found net devices under 0000:86:00.0: cvl_0_0 00:14:18.893 21:07:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.893 21:07:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.893 21:07:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.893 21:07:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:18.893 21:07:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.893 21:07:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:18.893 Found net devices under 0000:86:00.1: cvl_0_1 00:14:18.893 21:07:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.893 21:07:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:18.893 21:07:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:18.893 21:07:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:18.893 21:07:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:18.893 21:07:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:18.893 21:07:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.893 21:07:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.893 21:07:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.893 21:07:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:18.893 21:07:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.893 21:07:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.893 21:07:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:18.893 21:07:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.893 21:07:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.893 21:07:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:18.893 21:07:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:18.893 21:07:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.893 21:07:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.893 21:07:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.893 21:07:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.893 21:07:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:18.893 21:07:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.893 21:07:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.893 21:07:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.893 21:07:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:18.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:18.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:14:18.893 00:14:18.893 --- 10.0.0.2 ping statistics --- 00:14:18.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.893 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:14:18.893 21:07:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:14:18.893 00:14:18.893 --- 10.0.0.1 ping statistics --- 00:14:18.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.893 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:14:18.893 21:07:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.893 21:07:34 -- nvmf/common.sh@411 -- # return 0 00:14:18.893 21:07:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:18.893 21:07:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.893 21:07:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:18.893 21:07:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:18.893 21:07:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.893 21:07:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:18.893 21:07:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:18.893 21:07:34 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:18.893 21:07:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:18.893 21:07:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:18.893 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:14:18.893 21:07:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:18.893 21:07:34 -- nvmf/common.sh@470 -- # nvmfpid=3013371 00:14:18.893 21:07:34 -- nvmf/common.sh@471 -- # waitforlisten 3013371 00:14:18.893 21:07:34 -- common/autotest_common.sh@817 -- # '[' -z 3013371 ']' 00:14:18.893 21:07:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.893 21:07:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.893 21:07:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.893 21:07:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.893 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:14:18.893 [2024-04-18 21:07:34.275289] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:14:18.893 [2024-04-18 21:07:34.275333] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.893 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.893 [2024-04-18 21:07:34.339495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.893 [2024-04-18 21:07:34.420430] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.893 [2024-04-18 21:07:34.420460] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:18.893 [2024-04-18 21:07:34.420467] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.893 [2024-04-18 21:07:34.420474] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.893 [2024-04-18 21:07:34.420479] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.893 [2024-04-18 21:07:34.420499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.153 21:07:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:19.153 21:07:35 -- common/autotest_common.sh@850 -- # return 0 00:14:19.153 21:07:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:19.153 21:07:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:19.153 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.413 21:07:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.413 21:07:35 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.413 21:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.413 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.413 [2024-04-18 21:07:35.112023] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.413 21:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.413 21:07:35 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:19.413 21:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.413 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.413 Malloc0 00:14:19.413 21:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.413 21:07:35 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:19.413 21:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.413 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.413 21:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.413 21:07:35 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:19.413 21:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.413 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.413 21:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.413 21:07:35 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.413 21:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.413 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.413 [2024-04-18 21:07:35.180750] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.413 21:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.413 21:07:35 -- target/queue_depth.sh@30 -- # bdevperf_pid=3013589 00:14:19.413 21:07:35 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:19.413 21:07:35 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:19.413 21:07:35 -- target/queue_depth.sh@33 -- # waitforlisten 3013589 /var/tmp/bdevperf.sock 00:14:19.413 21:07:35 -- common/autotest_common.sh@817 -- # '[' -z 3013589 ']' 
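The initiator side of the queue_depth test, traced below, boils down to three steps: start bdevperf idle with -z on its own RPC socket, attach the controller that was just exported, then drive the 10-second verify workload through bdevperf.py. A sketch using the exact values from this run:

  # start bdevperf idle on a private RPC socket (the harness backgrounds this and waits for the socket)
  spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
  # attach the subsystem exported at 10.0.0.2:4420 as NVMe0
  spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the configured workload
  spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests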
00:14:19.413 21:07:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.413 21:07:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.413 21:07:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.413 21:07:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.413 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.413 [2024-04-18 21:07:35.226227] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:14:19.413 [2024-04-18 21:07:35.226268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013589 ] 00:14:19.413 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.413 [2024-04-18 21:07:35.283565] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.673 [2024-04-18 21:07:35.356315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.241 21:07:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.241 21:07:36 -- common/autotest_common.sh@850 -- # return 0 00:14:20.241 21:07:36 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:20.241 21:07:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.241 21:07:36 -- common/autotest_common.sh@10 -- # set +x 00:14:20.241 NVMe0n1 00:14:20.241 21:07:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.241 21:07:36 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.501 Running I/O for 10 seconds... 
00:14:30.484 00:14:30.484 Latency(us) 00:14:30.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.484 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:30.484 Verification LBA range: start 0x0 length 0x4000 00:14:30.484 NVMe0n1 : 10.05 12233.44 47.79 0.00 0.00 83427.54 14189.97 55392.17 00:14:30.484 =================================================================================================================== 00:14:30.484 Total : 12233.44 47.79 0.00 0.00 83427.54 14189.97 55392.17 00:14:30.484 0 00:14:30.484 21:07:46 -- target/queue_depth.sh@39 -- # killprocess 3013589 00:14:30.484 21:07:46 -- common/autotest_common.sh@936 -- # '[' -z 3013589 ']' 00:14:30.484 21:07:46 -- common/autotest_common.sh@940 -- # kill -0 3013589 00:14:30.484 21:07:46 -- common/autotest_common.sh@941 -- # uname 00:14:30.484 21:07:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.484 21:07:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3013589 00:14:30.484 21:07:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:30.484 21:07:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:30.484 21:07:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3013589' 00:14:30.484 killing process with pid 3013589 00:14:30.484 21:07:46 -- common/autotest_common.sh@955 -- # kill 3013589 00:14:30.484 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.484 00:14:30.484 Latency(us) 00:14:30.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.484 =================================================================================================================== 00:14:30.484 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.484 21:07:46 -- common/autotest_common.sh@960 -- # wait 3013589 00:14:30.744 21:07:46 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:30.744 21:07:46 -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:30.744 21:07:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:30.744 21:07:46 -- nvmf/common.sh@117 -- # sync 00:14:30.744 21:07:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.744 21:07:46 -- nvmf/common.sh@120 -- # set +e 00:14:30.744 21:07:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.744 21:07:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.744 rmmod nvme_tcp 00:14:30.744 rmmod nvme_fabrics 00:14:30.744 rmmod nvme_keyring 00:14:30.744 21:07:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.744 21:07:46 -- nvmf/common.sh@124 -- # set -e 00:14:30.744 21:07:46 -- nvmf/common.sh@125 -- # return 0 00:14:30.744 21:07:46 -- nvmf/common.sh@478 -- # '[' -n 3013371 ']' 00:14:30.744 21:07:46 -- nvmf/common.sh@479 -- # killprocess 3013371 00:14:30.744 21:07:46 -- common/autotest_common.sh@936 -- # '[' -z 3013371 ']' 00:14:30.744 21:07:46 -- common/autotest_common.sh@940 -- # kill -0 3013371 00:14:30.744 21:07:46 -- common/autotest_common.sh@941 -- # uname 00:14:30.744 21:07:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.744 21:07:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3013371 00:14:30.744 21:07:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:30.744 21:07:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:30.744 21:07:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3013371' 00:14:30.744 killing process with pid 3013371 00:14:30.744 
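A rough Little's-law cross-check of the verify result above: sustained IOPS multiplied by average completion latency should land near the outstanding queue depth, and 12233.44 IO/s x 83427.54 us ≈ 1020 in-flight I/Os, consistent with the -q 1024 used for this run.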
21:07:46 -- common/autotest_common.sh@955 -- # kill 3013371 00:14:30.744 21:07:46 -- common/autotest_common.sh@960 -- # wait 3013371 00:14:31.004 21:07:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:31.004 21:07:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:31.004 21:07:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:31.004 21:07:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.004 21:07:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.004 21:07:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.004 21:07:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.004 21:07:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.542 21:07:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.542 00:14:33.542 real 0m20.865s 00:14:33.542 user 0m24.954s 00:14:33.542 sys 0m6.084s 00:14:33.542 21:07:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.542 21:07:48 -- common/autotest_common.sh@10 -- # set +x 00:14:33.542 ************************************ 00:14:33.542 END TEST nvmf_queue_depth 00:14:33.542 ************************************ 00:14:33.542 21:07:48 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:33.542 21:07:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:33.542 21:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.542 21:07:48 -- common/autotest_common.sh@10 -- # set +x 00:14:33.542 ************************************ 00:14:33.542 START TEST nvmf_multipath 00:14:33.542 ************************************ 00:14:33.542 21:07:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:33.542 * Looking for test storage... 
00:14:33.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.542 21:07:49 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.542 21:07:49 -- nvmf/common.sh@7 -- # uname -s 00:14:33.542 21:07:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.542 21:07:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.542 21:07:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.542 21:07:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.542 21:07:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.542 21:07:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.542 21:07:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.542 21:07:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.542 21:07:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.542 21:07:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.542 21:07:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.542 21:07:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.542 21:07:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.542 21:07:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.542 21:07:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.542 21:07:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.542 21:07:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.542 21:07:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.542 21:07:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.542 21:07:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.542 21:07:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.542 21:07:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.542 21:07:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.542 21:07:49 -- paths/export.sh@5 -- # export PATH 00:14:33.542 21:07:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.542 21:07:49 -- nvmf/common.sh@47 -- # : 0 00:14:33.542 21:07:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.543 21:07:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.543 21:07:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.543 21:07:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.543 21:07:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.543 21:07:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.543 21:07:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.543 21:07:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.543 21:07:49 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.543 21:07:49 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.543 21:07:49 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:33.543 21:07:49 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.543 21:07:49 -- target/multipath.sh@43 -- # nvmftestinit 00:14:33.543 21:07:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:33.543 21:07:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.543 21:07:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:33.543 21:07:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:33.543 21:07:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:33.543 21:07:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.543 21:07:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.543 21:07:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.543 21:07:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:33.543 21:07:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:33.543 21:07:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.543 21:07:49 -- common/autotest_common.sh@10 -- # set +x 00:14:40.144 21:07:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:40.144 21:07:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.144 21:07:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.144 21:07:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.144 21:07:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.144 21:07:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.144 21:07:54 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.144 21:07:54 -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.144 21:07:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.144 21:07:54 -- nvmf/common.sh@296 -- # e810=() 00:14:40.144 21:07:54 -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.144 21:07:54 -- nvmf/common.sh@297 -- # x722=() 00:14:40.144 21:07:54 -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.144 21:07:54 -- nvmf/common.sh@298 -- # mlx=() 00:14:40.144 21:07:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.144 21:07:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.144 21:07:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.144 21:07:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.144 21:07:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.144 21:07:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.144 21:07:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:40.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:40.144 21:07:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.144 21:07:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:40.144 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:40.144 21:07:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.144 21:07:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.144 21:07:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.144 21:07:54 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:14:40.144 21:07:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.144 21:07:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:40.144 Found net devices under 0000:86:00.0: cvl_0_0 00:14:40.144 21:07:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.144 21:07:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.144 21:07:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.144 21:07:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:40.144 21:07:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.144 21:07:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:40.144 Found net devices under 0000:86:00.1: cvl_0_1 00:14:40.144 21:07:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.144 21:07:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:40.144 21:07:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:40.144 21:07:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:40.144 21:07:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:40.144 21:07:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.144 21:07:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.144 21:07:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.144 21:07:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.144 21:07:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.144 21:07:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.144 21:07:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.144 21:07:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.144 21:07:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.144 21:07:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.144 21:07:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.144 21:07:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.144 21:07:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.144 21:07:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.144 21:07:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.144 21:07:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.144 21:07:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.144 21:07:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.144 21:07:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.144 21:07:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:14:40.144 00:14:40.144 --- 10.0.0.2 ping statistics --- 00:14:40.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.144 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:14:40.144 21:07:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:14:40.144 00:14:40.144 --- 10.0.0.1 ping statistics --- 00:14:40.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.145 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:14:40.145 21:07:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.145 21:07:55 -- nvmf/common.sh@411 -- # return 0 00:14:40.145 21:07:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:40.145 21:07:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.145 21:07:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:40.145 21:07:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:40.145 21:07:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.145 21:07:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:40.145 21:07:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:40.145 21:07:55 -- target/multipath.sh@45 -- # '[' -z ']' 00:14:40.145 21:07:55 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:40.145 only one NIC for nvmf test 00:14:40.145 21:07:55 -- target/multipath.sh@47 -- # nvmftestfini 00:14:40.145 21:07:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:40.145 21:07:55 -- nvmf/common.sh@117 -- # sync 00:14:40.145 21:07:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.145 21:07:55 -- nvmf/common.sh@120 -- # set +e 00:14:40.145 21:07:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.145 21:07:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.145 rmmod nvme_tcp 00:14:40.145 rmmod nvme_fabrics 00:14:40.145 rmmod nvme_keyring 00:14:40.145 21:07:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.145 21:07:55 -- nvmf/common.sh@124 -- # set -e 00:14:40.145 21:07:55 -- nvmf/common.sh@125 -- # return 0 00:14:40.145 21:07:55 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:14:40.145 21:07:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:40.145 21:07:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:40.145 21:07:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:40.145 21:07:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.145 21:07:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:40.145 21:07:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.145 21:07:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.145 21:07:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.524 21:07:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:41.524 21:07:57 -- target/multipath.sh@48 -- # exit 0 00:14:41.524 21:07:57 -- target/multipath.sh@1 -- # nvmftestfini 00:14:41.524 21:07:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:41.524 21:07:57 -- nvmf/common.sh@117 -- # sync 00:14:41.524 21:07:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.524 21:07:57 -- nvmf/common.sh@120 -- # set +e 00:14:41.524 21:07:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.524 21:07:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.524 21:07:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.524 21:07:57 -- nvmf/common.sh@124 -- # set -e 00:14:41.524 21:07:57 -- nvmf/common.sh@125 -- # return 0 00:14:41.524 21:07:57 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:14:41.524 21:07:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:41.524 21:07:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:41.524 21:07:57 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:14:41.524 21:07:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.524 21:07:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.524 21:07:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.524 21:07:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.524 21:07:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.524 21:07:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:41.524 00:14:41.524 real 0m8.325s 00:14:41.524 user 0m1.765s 00:14:41.524 sys 0m4.559s 00:14:41.524 21:07:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:41.524 21:07:57 -- common/autotest_common.sh@10 -- # set +x 00:14:41.524 ************************************ 00:14:41.524 END TEST nvmf_multipath 00:14:41.524 ************************************ 00:14:41.524 21:07:57 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:41.524 21:07:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:41.524 21:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.524 21:07:57 -- common/autotest_common.sh@10 -- # set +x 00:14:41.784 ************************************ 00:14:41.784 START TEST nvmf_zcopy 00:14:41.784 ************************************ 00:14:41.784 21:07:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:41.784 * Looking for test storage... 00:14:41.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.784 21:07:57 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.784 21:07:57 -- nvmf/common.sh@7 -- # uname -s 00:14:41.784 21:07:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.784 21:07:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.784 21:07:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.784 21:07:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.784 21:07:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.784 21:07:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.784 21:07:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.784 21:07:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.784 21:07:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.784 21:07:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.784 21:07:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.784 21:07:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.784 21:07:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.784 21:07:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.784 21:07:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.784 21:07:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.784 21:07:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.784 21:07:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.784 21:07:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.784 21:07:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.784 
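The nvmftestfini/nvmf_tcp_fini teardown traced at the end of the multipath test boils down to a few commands; a minimal sketch follows, noting that _remove_spdk_ns is not expanded in the trace, so the ip netns delete line is an assumption about what that helper does:

    sync
    modprobe -v -r nvme-tcp           # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1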
21:07:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.784 21:07:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.784 21:07:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.784 21:07:57 -- paths/export.sh@5 -- # export PATH 00:14:41.784 21:07:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.784 21:07:57 -- nvmf/common.sh@47 -- # : 0 00:14:41.784 21:07:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.784 21:07:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.784 21:07:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.784 21:07:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.784 21:07:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.784 21:07:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.784 21:07:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.784 21:07:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.784 21:07:57 -- target/zcopy.sh@12 -- # nvmftestinit 00:14:41.784 21:07:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:41.784 21:07:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.784 21:07:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:41.784 21:07:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:41.784 21:07:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:41.784 21:07:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.784 21:07:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
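The nvmf_tgt command line used later in this run is assembled from the arrays being set up here; a minimal sketch of that composition, where only the appended options, the netns wrapper and the final launched command are visible in the trace, and the initial array value is an assumption:

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)   # assumed initial value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                    # build_nvmf_app_args, as traced
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")    # set once the test namespace exists
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")         # prefix the netns wrapper
    "${NVMF_APP[@]}" -m 0x2 &   # expands to: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2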
00:14:41.784 21:07:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.784 21:07:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:41.784 21:07:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:41.784 21:07:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:41.784 21:07:57 -- common/autotest_common.sh@10 -- # set +x 00:14:48.398 21:08:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:48.398 21:08:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:48.398 21:08:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:48.398 21:08:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:48.398 21:08:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:48.398 21:08:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:48.398 21:08:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:48.398 21:08:03 -- nvmf/common.sh@295 -- # net_devs=() 00:14:48.398 21:08:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:48.398 21:08:03 -- nvmf/common.sh@296 -- # e810=() 00:14:48.398 21:08:03 -- nvmf/common.sh@296 -- # local -ga e810 00:14:48.398 21:08:03 -- nvmf/common.sh@297 -- # x722=() 00:14:48.398 21:08:03 -- nvmf/common.sh@297 -- # local -ga x722 00:14:48.398 21:08:03 -- nvmf/common.sh@298 -- # mlx=() 00:14:48.398 21:08:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:48.398 21:08:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.398 21:08:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:48.398 21:08:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:48.398 21:08:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:48.398 21:08:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.398 21:08:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:48.398 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:48.398 21:08:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.398 21:08:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:48.398 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:14:48.398 21:08:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:48.398 21:08:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:48.398 21:08:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.399 21:08:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.399 21:08:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:48.399 21:08:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.399 21:08:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:48.399 Found net devices under 0000:86:00.0: cvl_0_0 00:14:48.399 21:08:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.399 21:08:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.399 21:08:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.399 21:08:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:48.399 21:08:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.399 21:08:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:48.399 Found net devices under 0000:86:00.1: cvl_0_1 00:14:48.399 21:08:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.399 21:08:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:48.399 21:08:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:48.399 21:08:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:48.399 21:08:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:48.399 21:08:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:48.399 21:08:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.399 21:08:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.399 21:08:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.399 21:08:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:48.399 21:08:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.399 21:08:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.399 21:08:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:48.399 21:08:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.399 21:08:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.399 21:08:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:48.399 21:08:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:48.399 21:08:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.399 21:08:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.399 21:08:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.399 21:08:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.399 21:08:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:48.399 21:08:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.399 21:08:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.399 
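Collected from the nvmf_tcp_init trace above: the two cvl ports are reset and split so that the target side (cvl_0_0, 10.0.0.2/24) moves into the cvl_0_0_ns_spdk namespace while the initiator side (cvl_0_1, 10.0.0.1/24) stays in the root namespace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The lines that follow then open TCP port 4420 on the initiator interface with iptables and ping in both directions before the transport is considered usable.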
21:08:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.399 21:08:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:48.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:14:48.399 00:14:48.399 --- 10.0.0.2 ping statistics --- 00:14:48.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.399 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:14:48.399 21:08:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:14:48.399 00:14:48.399 --- 10.0.0.1 ping statistics --- 00:14:48.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.399 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:14:48.399 21:08:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.399 21:08:03 -- nvmf/common.sh@411 -- # return 0 00:14:48.399 21:08:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:48.399 21:08:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.399 21:08:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:48.399 21:08:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:48.399 21:08:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.399 21:08:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:48.399 21:08:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:48.399 21:08:03 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:48.399 21:08:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:48.399 21:08:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:48.399 21:08:03 -- common/autotest_common.sh@10 -- # set +x 00:14:48.399 21:08:03 -- nvmf/common.sh@470 -- # nvmfpid=3023061 00:14:48.399 21:08:03 -- nvmf/common.sh@471 -- # waitforlisten 3023061 00:14:48.399 21:08:03 -- common/autotest_common.sh@817 -- # '[' -z 3023061 ']' 00:14:48.399 21:08:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.399 21:08:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.399 21:08:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.399 21:08:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:48.399 21:08:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.399 21:08:03 -- common/autotest_common.sh@10 -- # set +x 00:14:48.399 [2024-04-18 21:08:03.637544] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:14:48.399 [2024-04-18 21:08:03.637587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.399 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.399 [2024-04-18 21:08:03.701388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.399 [2024-04-18 21:08:03.777783] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:48.399 [2024-04-18 21:08:03.777819] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.399 [2024-04-18 21:08:03.777826] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.399 [2024-04-18 21:08:03.777832] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.399 [2024-04-18 21:08:03.777837] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.399 [2024-04-18 21:08:03.777873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.658 21:08:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.658 21:08:04 -- common/autotest_common.sh@850 -- # return 0 00:14:48.658 21:08:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:48.658 21:08:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:48.658 21:08:04 -- common/autotest_common.sh@10 -- # set +x 00:14:48.658 21:08:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.658 21:08:04 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:48.658 21:08:04 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:48.658 21:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.658 21:08:04 -- common/autotest_common.sh@10 -- # set +x 00:14:48.658 [2024-04-18 21:08:04.472884] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.658 21:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.658 21:08:04 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:48.658 21:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.658 21:08:04 -- common/autotest_common.sh@10 -- # set +x 00:14:48.658 21:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.658 21:08:04 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.658 21:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.658 21:08:04 -- common/autotest_common.sh@10 -- # set +x 00:14:48.658 [2024-04-18 21:08:04.489000] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.658 21:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.658 21:08:04 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.658 21:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.658 21:08:04 -- common/autotest_common.sh@10 -- # set +x 00:14:48.658 21:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.658 21:08:04 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:48.658 21:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.658 21:08:04 -- common/autotest_common.sh@10 -- # set +x 00:14:48.658 malloc0 00:14:48.658 21:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.658 21:08:04 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:48.659 21:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.659 21:08:04 -- common/autotest_common.sh@10 -- # set +x 00:14:48.659 21:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.659 21:08:04 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:48.659 21:08:04 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:48.659 21:08:04 -- nvmf/common.sh@521 -- # config=() 00:14:48.659 21:08:04 -- nvmf/common.sh@521 -- # local subsystem config 00:14:48.659 21:08:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:48.659 21:08:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:48.659 { 00:14:48.659 "params": { 00:14:48.659 "name": "Nvme$subsystem", 00:14:48.659 "trtype": "$TEST_TRANSPORT", 00:14:48.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.659 "adrfam": "ipv4", 00:14:48.659 "trsvcid": "$NVMF_PORT", 00:14:48.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.659 "hdgst": ${hdgst:-false}, 00:14:48.659 "ddgst": ${ddgst:-false} 00:14:48.659 }, 00:14:48.659 "method": "bdev_nvme_attach_controller" 00:14:48.659 } 00:14:48.659 EOF 00:14:48.659 )") 00:14:48.659 21:08:04 -- nvmf/common.sh@543 -- # cat 00:14:48.659 21:08:04 -- nvmf/common.sh@545 -- # jq . 00:14:48.659 21:08:04 -- nvmf/common.sh@546 -- # IFS=, 00:14:48.659 21:08:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:48.659 "params": { 00:14:48.659 "name": "Nvme1", 00:14:48.659 "trtype": "tcp", 00:14:48.659 "traddr": "10.0.0.2", 00:14:48.659 "adrfam": "ipv4", 00:14:48.659 "trsvcid": "4420", 00:14:48.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.659 "hdgst": false, 00:14:48.659 "ddgst": false 00:14:48.659 }, 00:14:48.659 "method": "bdev_nvme_attach_controller" 00:14:48.659 }' 00:14:48.659 [2024-04-18 21:08:04.566363] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:14:48.659 [2024-04-18 21:08:04.566408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3023291 ] 00:14:48.918 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.918 [2024-04-18 21:08:04.625361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.918 [2024-04-18 21:08:04.696264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.177 Running I/O for 10 seconds... 
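The zcopy target provisioning traced above can be replayed by hand; a minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock socket named in the wait message, that gen_nvmf_target_json is the harness helper emitting the bdev_nvme_attach_controller config printed above, and that the commands are run from the spdk checkout:

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                 # TCP transport with zero-copy enabled
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                        # 32 MB ram disk, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The /dev/fd/62 argument in the trace is the process substitution written out explicitly here.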
00:14:59.161 00:14:59.161 Latency(us) 00:14:59.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:59.161 Verification LBA range: start 0x0 length 0x1000 00:14:59.161 Nvme1n1 : 10.01 8340.64 65.16 0.00 0.00 15303.17 2108.55 38979.67 00:14:59.161 =================================================================================================================== 00:14:59.161 Total : 8340.64 65.16 0.00 0.00 15303.17 2108.55 38979.67 00:14:59.421 21:08:15 -- target/zcopy.sh@39 -- # perfpid=3025121 00:14:59.421 21:08:15 -- target/zcopy.sh@41 -- # xtrace_disable 00:14:59.421 21:08:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.421 21:08:15 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:59.421 21:08:15 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:59.421 21:08:15 -- nvmf/common.sh@521 -- # config=() 00:14:59.421 21:08:15 -- nvmf/common.sh@521 -- # local subsystem config 00:14:59.421 21:08:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:59.421 21:08:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:59.421 { 00:14:59.421 "params": { 00:14:59.421 "name": "Nvme$subsystem", 00:14:59.421 "trtype": "$TEST_TRANSPORT", 00:14:59.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:59.421 "adrfam": "ipv4", 00:14:59.421 "trsvcid": "$NVMF_PORT", 00:14:59.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:59.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:59.421 "hdgst": ${hdgst:-false}, 00:14:59.421 "ddgst": ${ddgst:-false} 00:14:59.421 }, 00:14:59.421 "method": "bdev_nvme_attach_controller" 00:14:59.421 } 00:14:59.421 EOF 00:14:59.421 )") 00:14:59.421 [2024-04-18 21:08:15.189426] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.189456] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 21:08:15 -- nvmf/common.sh@543 -- # cat 00:14:59.421 21:08:15 -- nvmf/common.sh@545 -- # jq . 
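A quick consistency check on the verify-run row above: with 8192-byte IOs, the IOPS and MiB/s columns agree, and queue depth divided by the average latency lands near the same rate:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8340.64 * 8192 / 1048576 }'    # 65.16, matching the MiB/s column
    awk 'BEGIN { printf "%.0f IOPS\n", 128 / (15303.17 / 1e6) }'       # ~8364, close to the reported 8340.64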
00:14:59.421 [2024-04-18 21:08:15.197414] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.197427] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 21:08:15 -- nvmf/common.sh@546 -- # IFS=, 00:14:59.421 21:08:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:59.421 "params": { 00:14:59.421 "name": "Nvme1", 00:14:59.421 "trtype": "tcp", 00:14:59.421 "traddr": "10.0.0.2", 00:14:59.421 "adrfam": "ipv4", 00:14:59.421 "trsvcid": "4420", 00:14:59.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:59.421 "hdgst": false, 00:14:59.421 "ddgst": false 00:14:59.421 }, 00:14:59.421 "method": "bdev_nvme_attach_controller" 00:14:59.421 }' 00:14:59.421 [2024-04-18 21:08:15.205432] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.205443] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.213453] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.213462] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.221475] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.221484] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.226202] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:14:59.421 [2024-04-18 21:08:15.226244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025121 ] 00:14:59.421 [2024-04-18 21:08:15.229497] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.229507] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.237524] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.237534] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.245544] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.245553] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.421 [2024-04-18 21:08:15.253581] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.253590] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.261583] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.261592] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.269605] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.269613] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.277626] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:14:59.421 [2024-04-18 21:08:15.277634] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.284094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.421 [2024-04-18 21:08:15.285646] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.285655] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.293670] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.293681] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.301689] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.301698] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.309711] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.309724] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.317733] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.317742] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.325757] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.325774] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.333776] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.333785] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.341798] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.341806] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.421 [2024-04-18 21:08:15.349821] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.421 [2024-04-18 21:08:15.349829] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.357757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.681 [2024-04-18 21:08:15.357859] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.357868] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.365862] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.365871] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.373894] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.373912] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.381908] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.381919] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.389930] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.389940] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.397949] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.397959] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.405969] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.405978] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.413991] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.414001] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.426022] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.426031] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.434040] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.434048] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.442063] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.442071] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.450096] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.450114] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.458113] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.458125] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.466134] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.466146] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.474157] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.474170] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.482181] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.482194] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.490204] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.490217] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.498224] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.498233] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.681 [2024-04-18 21:08:15.506253] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.681 [2024-04-18 21:08:15.506269] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.514269] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.514278] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 Running I/O for 5 seconds... 00:14:59.682 [2024-04-18 21:08:15.522291] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.522300] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.535380] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.535399] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.545345] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.545364] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.553601] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.553618] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.561539] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.561557] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.571505] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.571529] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.580208] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.580225] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.588489] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.588507] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.597551] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.597568] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.682 [2024-04-18 21:08:15.606008] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.682 [2024-04-18 21:08:15.606025] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.941 [2024-04-18 21:08:15.614733] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.941 [2024-04-18 21:08:15.614751] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.941 [2024-04-18 21:08:15.621774] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.941 [2024-04-18 21:08:15.621796] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.941 [2024-04-18 21:08:15.633208] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.941 [2024-04-18 21:08:15.633225] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.941 [2024-04-18 21:08:15.640500] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.941 [2024-04-18 21:08:15.640522] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.651054] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.651072] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.659767] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.659784] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.667189] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.667207] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.674786] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.674804] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.685432] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.685449] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.693981] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.693999] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.702724] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.702742] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.711835] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.711852] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.720837] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.720855] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.728215] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.728232] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.738325] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.738343] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.746976] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.746994] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.753873] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.753890] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.764500] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.764522] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.771332] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.771349] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.781516] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.781534] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.790363] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.790384] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.798939] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.798957] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.806305] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.806322] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.815886] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.815903] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.824700] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.824717] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.833347] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.833364] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.841876] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.841894] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.850508] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.850533] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.859061] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.859080] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.942 [2024-04-18 21:08:15.869690] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.942 [2024-04-18 21:08:15.869708] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.878406] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.878424] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.887176] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.887194] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.896353] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.896370] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.905011] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.905029] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.913835] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.913852] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.922815] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.922832] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.931138] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.931154] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.938068] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.938084] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.948454] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.948473] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.955341] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.955363] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.965681] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.965699] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.975206] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.975223] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.982128] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.982144] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:15.993341] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:15.993358] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:16.002272] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:16.002290] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:16.011041] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:16.011058] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.201 [2024-04-18 21:08:16.017901] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.201 [2024-04-18 21:08:16.017918] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:00.201 [2024-04-18 21:08:16.028438] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:00.201 [2024-04-18 21:08:16.028456] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair (subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1538:nvmf_rpc_ns_paused: "Unable to add namespace") repeats for every add attempt from 21:08:16.028 through 21:08:18.788 (elapsed 00:15:00.201 - 00:15:03.061) ...]
00:15:03.061 [2024-04-18 21:08:18.788408] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:03.061 [2024-04-18 21:08:18.788425] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.061 [2024-04-18 21:08:18.797353] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.061 [2024-04-18 21:08:18.806506] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.061 [2024-04-18 21:08:18.806529] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.061 [2024-04-18 21:08:18.815023] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.061 [2024-04-18 21:08:18.815042] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.061 [2024-04-18 21:08:18.824306] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.061 [2024-04-18 21:08:18.824324] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.061 [2024-04-18 21:08:18.831204] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.831221] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.841808] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.841826] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.850628] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.850647] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.859375] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.859393] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.867821] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.867838] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.876718] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.876740] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.885441] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.885459] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.894019] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.894037] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.902900] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.902918] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.911627] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.911645] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.920335] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.920353] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.928893] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.928910] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.937126] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.937143] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.946190] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.946207] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.955246] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.955263] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.964892] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.964910] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.973707] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.973724] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.982541] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.982559] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.062 [2024-04-18 21:08:18.991193] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.062 [2024-04-18 21:08:18.991211] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.000939] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.321 [2024-04-18 21:08:19.000956] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.009726] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.321 [2024-04-18 21:08:19.009744] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.018628] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.321 [2024-04-18 21:08:19.018647] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.027213] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.321 [2024-04-18 21:08:19.027231] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.035818] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.321 [2024-04-18 21:08:19.035836] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.045117] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.321 [2024-04-18 21:08:19.045135] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.053926] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.321 [2024-04-18 21:08:19.053943] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.063288] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.321 [2024-04-18 21:08:19.063306] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.321 [2024-04-18 21:08:19.072133] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.072150] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.079037] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.079055] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.090313] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.090330] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.098973] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.098991] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.107717] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.107735] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.116397] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.116415] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.124593] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.124610] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.133426] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.133443] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.142679] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.142696] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.151340] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.151358] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.160157] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.160175] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.169079] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.169097] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.178013] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.178031] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.187398] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.187416] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.196841] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.196859] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.206404] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.206422] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.215354] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.215372] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.224729] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.224746] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.233536] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.233553] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.243038] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.243055] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.322 [2024-04-18 21:08:19.250851] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.322 [2024-04-18 21:08:19.250869] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.581 [2024-04-18 21:08:19.269533] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.581 [2024-04-18 21:08:19.269552] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.581 [2024-04-18 21:08:19.278355] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.581 [2024-04-18 21:08:19.278373] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.581 [2024-04-18 21:08:19.285417] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.581 [2024-04-18 21:08:19.285433] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.581 [2024-04-18 21:08:19.297997] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.581 [2024-04-18 21:08:19.298015] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.581 [2024-04-18 21:08:19.308406] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.581 [2024-04-18 21:08:19.308423] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.581 [2024-04-18 21:08:19.317641] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.581 [2024-04-18 21:08:19.317659] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.581 [2024-04-18 21:08:19.325634] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.581 [2024-04-18 21:08:19.325652] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.581 [2024-04-18 21:08:19.334769] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.581 [2024-04-18 21:08:19.334787] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.343785] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.343802] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.351170] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.351187] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.361356] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.361374] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.370192] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.370210] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.378636] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.378653] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.386673] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.386690] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.399625] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.399643] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.409166] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.409183] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.417756] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.417773] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.425690] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.425707] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.437102] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.437119] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.449044] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.449062] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.456183] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.456200] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.466068] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.466086] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.473608] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.473625] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.483446] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.483463] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.490320] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.490337] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.501436] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.501454] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.582 [2024-04-18 21:08:19.508777] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.582 [2024-04-18 21:08:19.508795] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.841 [2024-04-18 21:08:19.519323] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.841 [2024-04-18 21:08:19.519341] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.841 [2024-04-18 21:08:19.527658] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.841 [2024-04-18 21:08:19.527676] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.841 [2024-04-18 21:08:19.536068] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.841 [2024-04-18 21:08:19.536085] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.841 [2024-04-18 21:08:19.544172] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.841 [2024-04-18 21:08:19.544189] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.841 [2024-04-18 21:08:19.553761] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.841 [2024-04-18 21:08:19.553779] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.841 [2024-04-18 21:08:19.561228] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.561244] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.571368] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.571385] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.578600] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.578617] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.588037] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.588054] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.597652] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.597670] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.606987] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.607004] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.615856] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.615873] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.624721] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.624738] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.631620] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.631638] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.641800] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.641817] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.649761] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.649778] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.659172] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.659188] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.667913] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.667936] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.677080] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.677097] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.685690] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.685707] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.694065] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.694083] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.703333] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.703350] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.712605] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.712622] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.722576] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.722593] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.731072] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.731094] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.738847] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.738864] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.748411] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.748429] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.757190] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.757207] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.842 [2024-04-18 21:08:19.765999] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.842 [2024-04-18 21:08:19.766016] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.774605] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.774622] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.783274] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.783291] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.792822] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.792840] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.801308] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.801326] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.810068] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.810085] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.816843] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.816860] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.827313] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.827331] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.836349] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.836366] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.845432] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.845450] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.854442] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.854459] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.863262] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.863280] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.871265] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.871283] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.879436] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.879453] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.887358] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.887375] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.897631] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.897653] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.906190] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.906207] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.915256] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.915274] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.924378] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.924398] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.931736] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.931754] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.941687] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.941704] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.950246] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.950263] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.957474] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.957491] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.967256] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.967273] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.975104] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.975122] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.984474] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.984491] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:19.991532] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:19.991550] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:20.002855] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:20.002874] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:20.013830] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:20.013851] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.101 [2024-04-18 21:08:20.025046] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.101 [2024-04-18 21:08:20.025065] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.360 [2024-04-18 21:08:20.034217] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.360 [2024-04-18 21:08:20.034251] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.360 [2024-04-18 21:08:20.044262] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.360 [2024-04-18 21:08:20.044282] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.360 [2024-04-18 21:08:20.052938] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.360 [2024-04-18 21:08:20.052955] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.360 [2024-04-18 21:08:20.059954] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.360 [2024-04-18 21:08:20.059973] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.071309] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.071332] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.080313] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.080330] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.090047] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.090065] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.097004] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.097021] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.107342] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.107360] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.114456] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.114473] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.124900] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.124919] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.134179] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.134196] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.143280] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.143298] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.152058] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.152075] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.160516] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.160533] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.169169] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.169187] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.177679] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.177697] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.186372] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.186389] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.194951] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.194969] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.203664] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.203682] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.213445] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.213463] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.222123] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.222142] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.231548] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.231566] 
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.239014] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.239037] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.249826] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.249845] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.259280] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.259299] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.268157] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.268178] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.277532] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.277552] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.361 [2024-04-18 21:08:20.284574] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.361 [2024-04-18 21:08:20.284592] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.296155] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.296174] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.305354] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.305372] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.314316] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.314334] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.323165] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.323183] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.332588] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.332606] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.341179] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.341197] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.350244] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.350263] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.359579] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.359597] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.368240] 
subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.368258] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.377533] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.377552] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.386279] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.386297] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.395573] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.395591] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.404999] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.405016] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.413670] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.413688] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.422309] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.422327] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.431206] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.431223] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.439879] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.439897] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.448706] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.448723] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.457528] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.457547] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.466154] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.466172] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.475603] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.475621] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.484350] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.484368] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.621 [2024-04-18 21:08:20.493386] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.621 [2024-04-18 21:08:20.493404] 
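The two *ERROR* lines above are the target's expected reaction to a namespace attach that names an NSID which is already populated: spdk_nvmf_subsystem_add_ns_ext() rejects the request and the nvmf_subsystem_add_ns RPC reports the failure. A minimal reproduction sketch against a running SPDK target follows; the repo-root-relative rpc.py path and the malloc bdev names are assumptions, not taken from this run.

# The first attach occupies NSID 1; the second is rejected, and the target logs
# the same subsystem.c:1982 / nvmf_rpc.c:1538 pair seen in this console log.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1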
00:15:04.621 [2024-04-18 21:08:20.493404] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:04.621 [2024-04-18 21:08:20.502843] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.621 [2024-04-18 21:08:20.502861] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:04.621 [2024-04-18 21:08:20.511647] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.621 [2024-04-18 21:08:20.511665] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:04.621 [2024-04-18 21:08:20.520071] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.621 [2024-04-18 21:08:20.520088] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:04.621 [2024-04-18 21:08:20.528898] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.621 [2024-04-18 21:08:20.528915] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:04.621 [2024-04-18 21:08:20.535490] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.621 [2024-04-18 21:08:20.535507] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:04.621
00:15:04.621 Latency(us)
00:15:04.621 Device Information                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:04.621 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:04.621 Nvme1n1                                                :       5.00   15913.69     124.33       0.00     0.00    8036.74    2436.23   54024.46
00:15:04.621 ===================================================================================================================
00:15:04.621 Total                                                  :              15913.69     124.33       0.00     0.00    8036.74    2436.23   54024.46
00:15:04.621 [2024-04-18 21:08:20.543399] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.621 [2024-04-18 21:08:20.543413] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:04.621 .. 00:15:04.882 (the error pair continues at the same cadence from [2024-04-18 21:08:20.551418] through [2024-04-18 21:08:20.591544])
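Two quick arithmetic checks on the Latency(us) summary above: the MiB/s column is just IOPS times the 8192-byte I/O size, and the average latency is close to what a permanently full queue of 128 would give at that IOPS (Little's law). A sketch, assuming bc is available on the host:

echo '15913.69 * 8192 / (1024 * 1024)' | bc -l   # ~124.33, matching the MiB/s column
echo '128 / 15913.69 * 1000000' | bc -l          # ~8043 us vs. the reported 8036.74 us average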
00:15:04.882 [2024-04-18 21:08:20.599553] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.882 [2024-04-18 21:08:20.599564] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:04.882 (the error pair repeats for the remaining attempts from [2024-04-18 21:08:20.607571] through [2024-04-18 21:08:20.703834])
00:15:04.882 [2024-04-18 21:08:20.711850] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.882 [2024-04-18 21:08:20.711858] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
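When triaging a run like this one, it helps to know how many attach attempts were rejected and over what window without scrolling through them. A rough sketch, assuming the console output was saved to a file named build.log (the filename is hypothetical):

grep -c 'Requested NSID 1 already in use' build.log                       # number of rejected attempts
grep -o '\[2024-04-18 [0-9:.]*\] subsystem.c:1982' build.log | head -n1   # first rejection
grep -o '\[2024-04-18 [0-9:.]*\] subsystem.c:1982' build.log | tail -n1   # last rejection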
nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.882 [2024-04-18 21:08:20.719877] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.882 [2024-04-18 21:08:20.719891] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.882 [2024-04-18 21:08:20.727895] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.882 [2024-04-18 21:08:20.727905] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.882 [2024-04-18 21:08:20.735916] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.882 [2024-04-18 21:08:20.735924] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.882 [2024-04-18 21:08:20.743936] subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.882 [2024-04-18 21:08:20.743946] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3025121) - No such process 00:15:04.882 21:08:20 -- target/zcopy.sh@49 -- # wait 3025121 00:15:04.882 21:08:20 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.882 21:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.882 21:08:20 -- common/autotest_common.sh@10 -- # set +x 00:15:04.882 21:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.882 21:08:20 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:04.882 21:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.882 21:08:20 -- common/autotest_common.sh@10 -- # set +x 00:15:04.882 delay0 00:15:04.882 21:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.882 21:08:20 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:04.882 21:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.882 21:08:20 -- common/autotest_common.sh@10 -- # set +x 00:15:04.883 21:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.883 21:08:20 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:04.883 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.142 [2024-04-18 21:08:20.911679] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:11.709 Initializing NVMe Controllers 00:15:11.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:11.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:11.709 Initialization complete. Launching workers. 
00:15:11.709 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:15:11.709 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 374, failed to submit 39 00:15:11.709 success 179, unsuccess 195, failed 0 00:15:11.709 21:08:27 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:11.709 21:08:27 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:11.709 21:08:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:11.709 21:08:27 -- nvmf/common.sh@117 -- # sync 00:15:11.709 21:08:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:11.709 21:08:27 -- nvmf/common.sh@120 -- # set +e 00:15:11.709 21:08:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:11.709 21:08:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:11.709 rmmod nvme_tcp 00:15:11.709 rmmod nvme_fabrics 00:15:11.709 rmmod nvme_keyring 00:15:11.709 21:08:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.709 21:08:27 -- nvmf/common.sh@124 -- # set -e 00:15:11.709 21:08:27 -- nvmf/common.sh@125 -- # return 0 00:15:11.709 21:08:27 -- nvmf/common.sh@478 -- # '[' -n 3023061 ']' 00:15:11.709 21:08:27 -- nvmf/common.sh@479 -- # killprocess 3023061 00:15:11.709 21:08:27 -- common/autotest_common.sh@936 -- # '[' -z 3023061 ']' 00:15:11.709 21:08:27 -- common/autotest_common.sh@940 -- # kill -0 3023061 00:15:11.709 21:08:27 -- common/autotest_common.sh@941 -- # uname 00:15:11.709 21:08:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:11.709 21:08:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3023061 00:15:11.709 21:08:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:11.709 21:08:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:11.709 21:08:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3023061' 00:15:11.709 killing process with pid 3023061 00:15:11.709 21:08:27 -- common/autotest_common.sh@955 -- # kill 3023061 00:15:11.709 21:08:27 -- common/autotest_common.sh@960 -- # wait 3023061 00:15:11.709 21:08:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:11.709 21:08:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:11.709 21:08:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:11.709 21:08:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.709 21:08:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.709 21:08:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.709 21:08:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.709 21:08:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.616 21:08:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:13.616 00:15:13.616 real 0m31.947s 00:15:13.616 user 0m42.531s 00:15:13.616 sys 0m10.876s 00:15:13.616 21:08:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:13.616 21:08:29 -- common/autotest_common.sh@10 -- # set +x 00:15:13.616 ************************************ 00:15:13.616 END TEST nvmf_zcopy 00:15:13.616 ************************************ 00:15:13.893 21:08:29 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:13.893 21:08:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.893 21:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.893 21:08:29 -- common/autotest_common.sh@10 -- # set +x 00:15:13.893 ************************************ 
00:15:13.893 START TEST nvmf_nmic 00:15:13.893 ************************************ 00:15:13.893 21:08:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:13.893 * Looking for test storage... 00:15:13.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.893 21:08:29 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.893 21:08:29 -- nvmf/common.sh@7 -- # uname -s 00:15:13.893 21:08:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.893 21:08:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.893 21:08:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.893 21:08:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.893 21:08:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.893 21:08:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.893 21:08:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.893 21:08:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.893 21:08:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.893 21:08:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.893 21:08:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.893 21:08:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.893 21:08:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.893 21:08:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.893 21:08:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.893 21:08:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.893 21:08:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.893 21:08:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.893 21:08:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.893 21:08:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.893 21:08:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.893 21:08:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.893 21:08:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.893 21:08:29 -- paths/export.sh@5 -- # export PATH 00:15:13.893 21:08:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.893 21:08:29 -- nvmf/common.sh@47 -- # : 0 00:15:13.893 21:08:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.893 21:08:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.893 21:08:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.893 21:08:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.893 21:08:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.893 21:08:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.893 21:08:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.893 21:08:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.171 21:08:29 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.171 21:08:29 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.171 21:08:29 -- target/nmic.sh@14 -- # nvmftestinit 00:15:14.171 21:08:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:14.171 21:08:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.171 21:08:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:14.171 21:08:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:14.171 21:08:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:14.171 21:08:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.171 21:08:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.171 21:08:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.171 21:08:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:14.171 21:08:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:14.171 21:08:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:14.171 21:08:29 -- common/autotest_common.sh@10 -- # set +x 00:15:20.743 21:08:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:20.743 21:08:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.743 21:08:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.743 21:08:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.743 21:08:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.743 21:08:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.743 21:08:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.743 21:08:35 -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.743 21:08:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.743 21:08:35 -- nvmf/common.sh@296 -- # 
e810=() 00:15:20.743 21:08:35 -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.743 21:08:35 -- nvmf/common.sh@297 -- # x722=() 00:15:20.743 21:08:35 -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.743 21:08:35 -- nvmf/common.sh@298 -- # mlx=() 00:15:20.743 21:08:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.743 21:08:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.743 21:08:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.743 21:08:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:20.743 21:08:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.743 21:08:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.743 21:08:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:20.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:20.743 21:08:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.743 21:08:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:20.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:20.743 21:08:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.743 21:08:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.743 21:08:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.743 21:08:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:20.743 21:08:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.743 21:08:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:20.743 Found net 
devices under 0000:86:00.0: cvl_0_0 00:15:20.743 21:08:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.743 21:08:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.743 21:08:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.743 21:08:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:20.743 21:08:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.743 21:08:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:20.743 Found net devices under 0000:86:00.1: cvl_0_1 00:15:20.743 21:08:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.743 21:08:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:20.743 21:08:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:20.743 21:08:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:20.743 21:08:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:20.744 21:08:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:20.744 21:08:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.744 21:08:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.744 21:08:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.744 21:08:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:20.744 21:08:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.744 21:08:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.744 21:08:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:20.744 21:08:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.744 21:08:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.744 21:08:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:20.744 21:08:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:20.744 21:08:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.744 21:08:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.744 21:08:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.744 21:08:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.744 21:08:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:20.744 21:08:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.744 21:08:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.744 21:08:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.744 21:08:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:20.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:15:20.744 00:15:20.744 --- 10.0.0.2 ping statistics --- 00:15:20.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.744 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:15:20.744 21:08:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:20.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:15:20.744 00:15:20.744 --- 10.0.0.1 ping statistics --- 00:15:20.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.744 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:15:20.744 21:08:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.744 21:08:35 -- nvmf/common.sh@411 -- # return 0 00:15:20.744 21:08:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:20.744 21:08:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.744 21:08:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:20.744 21:08:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:20.744 21:08:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.744 21:08:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:20.744 21:08:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:20.744 21:08:35 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:20.744 21:08:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:20.744 21:08:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:20.744 21:08:35 -- common/autotest_common.sh@10 -- # set +x 00:15:20.744 21:08:35 -- nvmf/common.sh@470 -- # nvmfpid=3030989 00:15:20.744 21:08:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.744 21:08:35 -- nvmf/common.sh@471 -- # waitforlisten 3030989 00:15:20.744 21:08:35 -- common/autotest_common.sh@817 -- # '[' -z 3030989 ']' 00:15:20.744 21:08:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.744 21:08:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:20.744 21:08:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.744 21:08:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:20.744 21:08:35 -- common/autotest_common.sh@10 -- # set +x 00:15:20.744 [2024-04-18 21:08:35.974415] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:15:20.744 [2024-04-18 21:08:35.974457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.744 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.744 [2024-04-18 21:08:36.038612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.744 [2024-04-18 21:08:36.118011] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.744 [2024-04-18 21:08:36.118045] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.744 [2024-04-18 21:08:36.118052] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.744 [2024-04-18 21:08:36.118059] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.744 [2024-04-18 21:08:36.118064] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
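For reference, the target-side networking that the harness set up just above can be replayed by hand. The sketch below only restates the commands already traced in this log; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the workspace path are specific to this run and would differ on another host.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # target reachable from the initiator side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF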
00:15:20.744 [2024-04-18 21:08:36.118096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.744 [2024-04-18 21:08:36.118184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.744 [2024-04-18 21:08:36.118271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.744 [2024-04-18 21:08:36.118272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.004 21:08:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:21.004 21:08:36 -- common/autotest_common.sh@850 -- # return 0 00:15:21.004 21:08:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:21.004 21:08:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 21:08:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.004 21:08:36 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 [2024-04-18 21:08:36.812348] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 Malloc0 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 [2024-04-18 21:08:36.868316] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:21.004 test case1: single bdev can't be used in multiple subsystems 00:15:21.004 21:08:36 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@28 -- # nmic_status=0 00:15:21.004 21:08:36 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 [2024-04-18 21:08:36.892255] bdev.c:7987:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:21.004 [2024-04-18 21:08:36.892273] subsystem.c:2016:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:21.004 [2024-04-18 21:08:36.892280] nvmf_rpc.c:1538:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.004 request: 00:15:21.004 { 00:15:21.004 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:21.004 "namespace": { 00:15:21.004 "bdev_name": "Malloc0", 00:15:21.004 "no_auto_visible": false 00:15:21.004 }, 00:15:21.004 "method": "nvmf_subsystem_add_ns", 00:15:21.004 "req_id": 1 00:15:21.004 } 00:15:21.004 Got JSON-RPC error response 00:15:21.004 response: 00:15:21.004 { 00:15:21.004 "code": -32602, 00:15:21.004 "message": "Invalid parameters" 00:15:21.004 } 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@29 -- # nmic_status=1 00:15:21.004 21:08:36 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:21.004 21:08:36 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:21.004 Adding namespace failed - expected result. 00:15:21.004 21:08:36 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:21.004 test case2: host connect to nvmf target in multiple paths 00:15:21.004 21:08:36 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:21.004 21:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.004 21:08:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 [2024-04-18 21:08:36.904372] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:21.004 21:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.004 21:08:36 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:22.385 21:08:38 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:23.324 21:08:39 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.324 21:08:39 -- common/autotest_common.sh@1184 -- # local i=0 00:15:23.324 21:08:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.324 21:08:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:23.324 21:08:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:25.860 21:08:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:25.860 21:08:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:25.860 21:08:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.860 21:08:41 -- 
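The expected-failure path exercised by test case1 above can be reproduced with the same JSON-RPC calls the script issued. This is only a sketch using scripts/rpc.py from the SPDK tree (it talks to the running target over /var/tmp/spdk.sock) with the subsystem and bdev names from this run; the second nvmf_subsystem_add_ns is the call that is expected to fail because Malloc0 is already claimed by cnode1.

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 exclusively
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed
  # -> JSON-RPC error -32602 "Invalid parameters", matching the request/response shown in the trace above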
common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:25.860 21:08:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.860 21:08:41 -- common/autotest_common.sh@1194 -- # return 0 00:15:25.860 21:08:41 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:25.860 [global] 00:15:25.860 thread=1 00:15:25.860 invalidate=1 00:15:25.860 rw=write 00:15:25.860 time_based=1 00:15:25.860 runtime=1 00:15:25.860 ioengine=libaio 00:15:25.860 direct=1 00:15:25.860 bs=4096 00:15:25.860 iodepth=1 00:15:25.860 norandommap=0 00:15:25.860 numjobs=1 00:15:25.860 00:15:25.860 verify_dump=1 00:15:25.860 verify_backlog=512 00:15:25.860 verify_state_save=0 00:15:25.860 do_verify=1 00:15:25.860 verify=crc32c-intel 00:15:25.860 [job0] 00:15:25.860 filename=/dev/nvme0n1 00:15:25.860 Could not set queue depth (nvme0n1) 00:15:25.860 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.860 fio-3.35 00:15:25.860 Starting 1 thread 00:15:26.799 00:15:26.799 job0: (groupid=0, jobs=1): err= 0: pid=3032058: Thu Apr 18 21:08:42 2024 00:15:26.799 read: IOPS=1408, BW=5634KiB/s (5770kB/s)(5640KiB/1001msec) 00:15:26.799 slat (nsec): min=5906, max=30715, avg=7081.25, stdev=1075.68 00:15:26.799 clat (usec): min=275, max=811, avg=436.90, stdev=57.27 00:15:26.799 lat (usec): min=282, max=842, avg=443.98, stdev=57.41 00:15:26.799 clat percentiles (usec): 00:15:26.799 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 367], 20.00th=[ 416], 00:15:26.799 | 30.00th=[ 420], 40.00th=[ 424], 50.00th=[ 433], 60.00th=[ 449], 00:15:26.799 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 490], 95.00th=[ 494], 00:15:26.799 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 791], 99.95th=[ 816], 00:15:26.799 | 99.99th=[ 816] 00:15:26.799 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:26.799 slat (nsec): min=9048, max=38929, avg=10164.43, stdev=1188.96 00:15:26.799 clat (usec): min=176, max=625, avg=229.09, stdev=47.87 00:15:26.799 lat (usec): min=186, max=664, avg=239.25, stdev=48.05 00:15:26.799 clat percentiles (usec): 00:15:26.799 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 194], 00:15:26.799 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:15:26.799 | 70.00th=[ 233], 80.00th=[ 265], 90.00th=[ 314], 95.00th=[ 338], 00:15:26.799 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 416], 99.95th=[ 627], 00:15:26.799 | 99.99th=[ 627] 00:15:26.799 bw ( KiB/s): min= 8168, max= 8168, per=100.00%, avg=8168.00, stdev= 0.00, samples=1 00:15:26.799 iops : min= 2042, max= 2042, avg=2042.00, stdev= 0.00, samples=1 00:15:26.799 lat (usec) : 250=39.68%, 500=58.89%, 750=1.29%, 1000=0.14% 00:15:26.799 cpu : usr=1.50%, sys=2.60%, ctx=2946, majf=0, minf=2 00:15:26.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:26.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.799 issued rwts: total=1410,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:26.799 00:15:26.799 Run status group 0 (all jobs): 00:15:26.799 READ: bw=5634KiB/s (5770kB/s), 5634KiB/s-5634KiB/s (5770kB/s-5770kB/s), io=5640KiB (5775kB), run=1001-1001msec 00:15:26.799 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), 
run=1001-1001msec 00:15:26.799 00:15:26.799 Disk stats (read/write): 00:15:26.799 nvme0n1: ios=1243/1536, merge=0/0, ticks=568/343, in_queue=911, util=91.98% 00:15:26.799 21:08:42 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:27.059 21:08:42 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.059 21:08:42 -- common/autotest_common.sh@1205 -- # local i=0 00:15:27.059 21:08:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:27.059 21:08:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.059 21:08:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:27.059 21:08:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.059 21:08:42 -- common/autotest_common.sh@1217 -- # return 0 00:15:27.059 21:08:42 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:27.059 21:08:42 -- target/nmic.sh@53 -- # nvmftestfini 00:15:27.059 21:08:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:27.059 21:08:42 -- nvmf/common.sh@117 -- # sync 00:15:27.059 21:08:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.059 21:08:42 -- nvmf/common.sh@120 -- # set +e 00:15:27.059 21:08:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.059 21:08:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.059 rmmod nvme_tcp 00:15:27.059 rmmod nvme_fabrics 00:15:27.059 rmmod nvme_keyring 00:15:27.059 21:08:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.059 21:08:42 -- nvmf/common.sh@124 -- # set -e 00:15:27.059 21:08:42 -- nvmf/common.sh@125 -- # return 0 00:15:27.059 21:08:42 -- nvmf/common.sh@478 -- # '[' -n 3030989 ']' 00:15:27.059 21:08:42 -- nvmf/common.sh@479 -- # killprocess 3030989 00:15:27.059 21:08:42 -- common/autotest_common.sh@936 -- # '[' -z 3030989 ']' 00:15:27.059 21:08:42 -- common/autotest_common.sh@940 -- # kill -0 3030989 00:15:27.059 21:08:42 -- common/autotest_common.sh@941 -- # uname 00:15:27.059 21:08:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.059 21:08:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3030989 00:15:27.059 21:08:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:27.059 21:08:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:27.059 21:08:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3030989' 00:15:27.059 killing process with pid 3030989 00:15:27.059 21:08:42 -- common/autotest_common.sh@955 -- # kill 3030989 00:15:27.059 21:08:42 -- common/autotest_common.sh@960 -- # wait 3030989 00:15:27.319 21:08:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:27.319 21:08:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:27.319 21:08:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:27.319 21:08:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.319 21:08:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.319 21:08:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.319 21:08:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.319 21:08:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.856 21:08:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.856 00:15:29.856 real 0m15.574s 00:15:29.856 user 0m35.188s 00:15:29.856 sys 0m5.388s 00:15:29.856 21:08:45 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:15:29.856 21:08:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.856 ************************************ 00:15:29.856 END TEST nvmf_nmic 00:15:29.856 ************************************ 00:15:29.856 21:08:45 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:29.856 21:08:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:29.856 21:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.856 21:08:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.856 ************************************ 00:15:29.856 START TEST nvmf_fio_target 00:15:29.856 ************************************ 00:15:29.856 21:08:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:29.856 * Looking for test storage... 00:15:29.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.856 21:08:45 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.856 21:08:45 -- nvmf/common.sh@7 -- # uname -s 00:15:29.856 21:08:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.856 21:08:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.856 21:08:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.856 21:08:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.856 21:08:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.856 21:08:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.856 21:08:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.856 21:08:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.856 21:08:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.856 21:08:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.856 21:08:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.856 21:08:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.856 21:08:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.856 21:08:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.856 21:08:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.856 21:08:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.856 21:08:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.856 21:08:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.856 21:08:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.856 21:08:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.856 21:08:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.856 21:08:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.856 21:08:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.856 21:08:45 -- paths/export.sh@5 -- # export PATH 00:15:29.856 21:08:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.856 21:08:45 -- nvmf/common.sh@47 -- # : 0 00:15:29.856 21:08:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.856 21:08:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.856 21:08:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.856 21:08:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.856 21:08:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.856 21:08:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.856 21:08:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.856 21:08:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.856 21:08:45 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.856 21:08:45 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.856 21:08:45 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.856 21:08:45 -- target/fio.sh@16 -- # nvmftestinit 00:15:29.856 21:08:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:29.856 21:08:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.856 21:08:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:29.856 21:08:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:29.856 21:08:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:29.856 21:08:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.856 21:08:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.856 21:08:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.856 21:08:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:29.856 21:08:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:29.856 21:08:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.856 21:08:45 -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.431 21:08:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:36.431 21:08:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.431 21:08:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.431 21:08:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.431 21:08:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.431 21:08:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.431 21:08:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.431 21:08:51 -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.431 21:08:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.431 21:08:51 -- nvmf/common.sh@296 -- # e810=() 00:15:36.431 21:08:51 -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.431 21:08:51 -- nvmf/common.sh@297 -- # x722=() 00:15:36.431 21:08:51 -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.431 21:08:51 -- nvmf/common.sh@298 -- # mlx=() 00:15:36.431 21:08:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.431 21:08:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.431 21:08:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.431 21:08:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.431 21:08:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.431 21:08:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.431 21:08:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:36.431 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:36.431 21:08:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.431 21:08:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:36.431 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:36.431 21:08:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:15:36.431 21:08:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.431 21:08:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.431 21:08:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.431 21:08:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:36.431 21:08:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.431 21:08:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:36.431 Found net devices under 0000:86:00.0: cvl_0_0 00:15:36.431 21:08:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.431 21:08:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.431 21:08:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.431 21:08:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:36.431 21:08:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.431 21:08:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:36.431 Found net devices under 0000:86:00.1: cvl_0_1 00:15:36.431 21:08:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.431 21:08:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:36.431 21:08:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:36.431 21:08:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:36.431 21:08:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:36.431 21:08:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.431 21:08:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.431 21:08:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.431 21:08:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.431 21:08:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.431 21:08:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.431 21:08:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.431 21:08:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.431 21:08:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.431 21:08:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.431 21:08:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.431 21:08:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.431 21:08:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.431 21:08:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.431 21:08:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.431 21:08:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.431 21:08:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.431 21:08:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.432 21:08:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.432 21:08:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:36.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:15:36.432 00:15:36.432 --- 10.0.0.2 ping statistics --- 00:15:36.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.432 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:15:36.432 21:08:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:15:36.432 00:15:36.432 --- 10.0.0.1 ping statistics --- 00:15:36.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.432 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:36.432 21:08:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.432 21:08:52 -- nvmf/common.sh@411 -- # return 0 00:15:36.432 21:08:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:36.432 21:08:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.432 21:08:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:36.432 21:08:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:36.432 21:08:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.432 21:08:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:36.432 21:08:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:36.432 21:08:52 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:36.432 21:08:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:36.432 21:08:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:36.432 21:08:52 -- common/autotest_common.sh@10 -- # set +x 00:15:36.432 21:08:52 -- nvmf/common.sh@470 -- # nvmfpid=3036125 00:15:36.432 21:08:52 -- nvmf/common.sh@471 -- # waitforlisten 3036125 00:15:36.432 21:08:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.432 21:08:52 -- common/autotest_common.sh@817 -- # '[' -z 3036125 ']' 00:15:36.432 21:08:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.432 21:08:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.432 21:08:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.432 21:08:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.432 21:08:52 -- common/autotest_common.sh@10 -- # set +x 00:15:36.432 [2024-04-18 21:08:52.133637] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:15:36.432 [2024-04-18 21:08:52.133682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.432 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.432 [2024-04-18 21:08:52.197293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.432 [2024-04-18 21:08:52.268602] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.432 [2024-04-18 21:08:52.268643] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:36.432 [2024-04-18 21:08:52.268650] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.432 [2024-04-18 21:08:52.268656] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.432 [2024-04-18 21:08:52.268661] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.432 [2024-04-18 21:08:52.268711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.432 [2024-04-18 21:08:52.268726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.432 [2024-04-18 21:08:52.268835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.432 [2024-04-18 21:08:52.268837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.372 21:08:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.372 21:08:52 -- common/autotest_common.sh@850 -- # return 0 00:15:37.372 21:08:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:37.372 21:08:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:37.372 21:08:52 -- common/autotest_common.sh@10 -- # set +x 00:15:37.372 21:08:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.372 21:08:52 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:37.372 [2024-04-18 21:08:53.135926] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.372 21:08:53 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:37.632 21:08:53 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:37.632 21:08:53 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:37.632 21:08:53 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:37.632 21:08:53 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:37.924 21:08:53 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:37.924 21:08:53 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:38.215 21:08:53 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:38.215 21:08:53 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:38.215 21:08:54 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:38.475 21:08:54 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:38.475 21:08:54 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:38.734 21:08:54 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:38.734 21:08:54 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:38.992 21:08:54 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:38.992 21:08:54 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:38.992 21:08:54 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:39.251 21:08:55 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:39.251 21:08:55 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:39.512 21:08:55 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:39.512 21:08:55 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:39.771 21:08:55 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.771 [2024-04-18 21:08:55.618455] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.771 21:08:55 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:40.031 21:08:55 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:40.290 21:08:56 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:41.669 21:08:57 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:41.669 21:08:57 -- common/autotest_common.sh@1184 -- # local i=0 00:15:41.669 21:08:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.669 21:08:57 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:15:41.669 21:08:57 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:15:41.669 21:08:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:43.603 21:08:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:43.603 21:08:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:43.603 21:08:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.603 21:08:59 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:15:43.603 21:08:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.603 21:08:59 -- common/autotest_common.sh@1194 -- # return 0 00:15:43.603 21:08:59 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:43.603 [global] 00:15:43.603 thread=1 00:15:43.603 invalidate=1 00:15:43.603 rw=write 00:15:43.603 time_based=1 00:15:43.603 runtime=1 00:15:43.603 ioengine=libaio 00:15:43.603 direct=1 00:15:43.603 bs=4096 00:15:43.603 iodepth=1 00:15:43.603 norandommap=0 00:15:43.603 numjobs=1 00:15:43.603 00:15:43.603 verify_dump=1 00:15:43.603 verify_backlog=512 00:15:43.603 verify_state_save=0 00:15:43.603 do_verify=1 00:15:43.603 verify=crc32c-intel 00:15:43.603 [job0] 00:15:43.603 filename=/dev/nvme0n1 00:15:43.603 [job1] 00:15:43.603 filename=/dev/nvme0n2 00:15:43.603 [job2] 00:15:43.603 filename=/dev/nvme0n3 00:15:43.603 [job3] 00:15:43.603 filename=/dev/nvme0n4 00:15:43.603 Could not set queue depth (nvme0n1) 00:15:43.603 Could not set queue depth (nvme0n2) 00:15:43.603 Could not set queue depth (nvme0n3) 00:15:43.603 Could not set queue depth (nvme0n4) 00:15:43.862 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:15:43.862 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:43.862 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:43.862 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:43.862 fio-3.35 00:15:43.862 Starting 4 threads 00:15:45.240 00:15:45.240 job0: (groupid=0, jobs=1): err= 0: pid=3037600: Thu Apr 18 21:09:00 2024 00:15:45.240 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:15:45.240 slat (nsec): min=8832, max=23601, avg=19874.91, stdev=4340.30 00:15:45.240 clat (usec): min=40820, max=42119, avg=41211.72, stdev=458.78 00:15:45.240 lat (usec): min=40836, max=42142, avg=41231.59, stdev=459.33 00:15:45.240 clat percentiles (usec): 00:15:45.240 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:45.240 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:45.240 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:15:45.240 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:45.240 | 99.99th=[42206] 00:15:45.240 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:15:45.240 slat (nsec): min=9484, max=37990, avg=10686.11, stdev=1556.07 00:15:45.240 clat (usec): min=157, max=463, avg=238.21, stdev=36.79 00:15:45.240 lat (usec): min=167, max=474, avg=248.90, stdev=36.99 00:15:45.240 clat percentiles (usec): 00:15:45.240 | 1.00th=[ 161], 5.00th=[ 186], 10.00th=[ 202], 20.00th=[ 215], 00:15:45.240 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:15:45.240 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 306], 00:15:45.240 | 99.00th=[ 355], 99.50th=[ 396], 99.90th=[ 465], 99.95th=[ 465], 00:15:45.240 | 99.99th=[ 465] 00:15:45.240 bw ( KiB/s): min= 4096, max= 4096, per=23.09%, avg=4096.00, stdev= 0.00, samples=1 00:15:45.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:45.240 lat (usec) : 250=68.16%, 500=27.72% 00:15:45.240 lat (msec) : 50=4.12% 00:15:45.240 cpu : usr=0.29%, sys=0.48%, ctx=535, majf=0, minf=2 00:15:45.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.240 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.240 job1: (groupid=0, jobs=1): err= 0: pid=3037616: Thu Apr 18 21:09:00 2024 00:15:45.240 read: IOPS=1066, BW=4268KiB/s (4370kB/s)(4272KiB/1001msec) 00:15:45.240 slat (nsec): min=4782, max=27763, avg=7675.81, stdev=2256.47 00:15:45.240 clat (usec): min=271, max=41850, avg=560.44, stdev=2173.05 00:15:45.240 lat (usec): min=279, max=41861, avg=568.11, stdev=2173.13 00:15:45.240 clat percentiles (usec): 00:15:45.240 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 371], 00:15:45.240 | 30.00th=[ 392], 40.00th=[ 416], 50.00th=[ 441], 60.00th=[ 461], 00:15:45.240 | 70.00th=[ 486], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 570], 00:15:45.240 | 99.00th=[ 775], 99.50th=[ 914], 99.90th=[41681], 99.95th=[41681], 00:15:45.240 | 99.99th=[41681] 00:15:45.240 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:45.240 slat (usec): min=4, max=3509, avg=12.53, stdev=89.32 
00:15:45.240 clat (usec): min=157, max=1094, avg=237.28, stdev=80.51 00:15:45.240 lat (usec): min=168, max=4318, avg=249.81, stdev=130.96 00:15:45.240 clat percentiles (usec): 00:15:45.240 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:15:45.240 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 221], 60.00th=[ 239], 00:15:45.240 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 289], 95.00th=[ 355], 00:15:45.240 | 99.00th=[ 627], 99.50th=[ 701], 99.90th=[ 807], 99.95th=[ 1090], 00:15:45.240 | 99.99th=[ 1090] 00:15:45.240 bw ( KiB/s): min= 7416, max= 7416, per=41.80%, avg=7416.00, stdev= 0.00, samples=1 00:15:45.240 iops : min= 1854, max= 1854, avg=1854.00, stdev= 0.00, samples=1 00:15:45.240 lat (usec) : 250=44.66%, 500=44.16%, 750=10.64%, 1000=0.35% 00:15:45.240 lat (msec) : 2=0.08%, 50=0.12% 00:15:45.240 cpu : usr=1.90%, sys=2.70%, ctx=2607, majf=0, minf=1 00:15:45.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.240 issued rwts: total=1068,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.240 job2: (groupid=0, jobs=1): err= 0: pid=3037642: Thu Apr 18 21:09:00 2024 00:15:45.240 read: IOPS=1018, BW=4076KiB/s (4173kB/s)(4100KiB/1006msec) 00:15:45.240 slat (nsec): min=6582, max=23788, avg=8466.40, stdev=1463.71 00:15:45.240 clat (usec): min=316, max=41292, avg=538.33, stdev=1278.40 00:15:45.240 lat (usec): min=323, max=41302, avg=546.80, stdev=1278.46 00:15:45.240 clat percentiles (usec): 00:15:45.240 | 1.00th=[ 334], 5.00th=[ 371], 10.00th=[ 433], 20.00th=[ 461], 00:15:45.240 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 498], 00:15:45.240 | 70.00th=[ 519], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 586], 00:15:45.240 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 2900], 99.95th=[41157], 00:15:45.240 | 99.99th=[41157] 00:15:45.240 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:15:45.240 slat (nsec): min=10666, max=41389, avg=12476.50, stdev=2390.94 00:15:45.240 clat (usec): min=176, max=920, avg=270.17, stdev=78.86 00:15:45.240 lat (usec): min=188, max=950, avg=282.65, stdev=79.64 00:15:45.240 clat percentiles (usec): 00:15:45.240 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 225], 00:15:45.240 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 258], 00:15:45.240 | 70.00th=[ 277], 80.00th=[ 306], 90.00th=[ 351], 95.00th=[ 383], 00:15:45.240 | 99.00th=[ 611], 99.50th=[ 734], 99.90th=[ 889], 99.95th=[ 922], 00:15:45.240 | 99.99th=[ 922] 00:15:45.240 bw ( KiB/s): min= 5984, max= 6304, per=34.63%, avg=6144.00, stdev=226.27, samples=2 00:15:45.240 iops : min= 1496, max= 1576, avg=1536.00, stdev=56.57, samples=2 00:15:45.240 lat (usec) : 250=32.68%, 500=49.82%, 750=16.48%, 1000=0.94% 00:15:45.240 lat (msec) : 4=0.04%, 50=0.04% 00:15:45.240 cpu : usr=2.49%, sys=3.88%, ctx=2563, majf=0, minf=1 00:15:45.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.240 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.240 job3: (groupid=0, jobs=1): err= 0: pid=3037647: Thu 
Apr 18 21:09:00 2024 00:15:45.240 read: IOPS=917, BW=3671KiB/s (3759kB/s)(3788KiB/1032msec) 00:15:45.240 slat (nsec): min=7430, max=29230, avg=8588.77, stdev=1635.92 00:15:45.240 clat (usec): min=270, max=41881, avg=739.76, stdev=3482.86 00:15:45.240 lat (usec): min=279, max=41889, avg=748.35, stdev=3483.07 00:15:45.240 clat percentiles (usec): 00:15:45.240 | 1.00th=[ 293], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 355], 00:15:45.240 | 30.00th=[ 379], 40.00th=[ 412], 50.00th=[ 437], 60.00th=[ 461], 00:15:45.240 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 570], 00:15:45.240 | 99.00th=[ 930], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:15:45.240 | 99.99th=[41681] 00:15:45.240 write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets 00:15:45.240 slat (usec): min=8, max=38966, avg=50.57, stdev=1217.31 00:15:45.240 clat (usec): min=176, max=552, avg=255.40, stdev=56.80 00:15:45.240 lat (usec): min=187, max=39518, avg=305.97, stdev=1227.90 00:15:45.240 clat percentiles (usec): 00:15:45.240 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 215], 00:15:45.240 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 251], 00:15:45.240 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 334], 95.00th=[ 367], 00:15:45.240 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 529], 99.95th=[ 553], 00:15:45.240 | 99.99th=[ 553] 00:15:45.240 bw ( KiB/s): min= 4087, max= 4096, per=23.06%, avg=4091.50, stdev= 6.36, samples=2 00:15:45.240 iops : min= 1021, max= 1024, avg=1022.50, stdev= 2.12, samples=2 00:15:45.240 lat (usec) : 250=30.64%, 500=58.35%, 750=10.30%, 1000=0.30% 00:15:45.240 lat (msec) : 4=0.05%, 50=0.36% 00:15:45.240 cpu : usr=1.94%, sys=2.81%, ctx=1974, majf=0, minf=1 00:15:45.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.240 issued rwts: total=947,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.240 00:15:45.240 Run status group 0 (all jobs): 00:15:45.240 READ: bw=11.5MiB/s (12.1MB/s), 84.7KiB/s-4268KiB/s (86.7kB/s-4370kB/s), io=12.0MiB (12.5MB), run=1001-1039msec 00:15:45.240 WRITE: bw=17.3MiB/s (18.2MB/s), 1971KiB/s-6138KiB/s (2018kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1039msec 00:15:45.240 00:15:45.240 Disk stats (read/write): 00:15:45.240 nvme0n1: ios=41/512, merge=0/0, ticks=1568/122, in_queue=1690, util=85.47% 00:15:45.241 nvme0n2: ios=1047/1024, merge=0/0, ticks=710/244, in_queue=954, util=91.06% 00:15:45.241 nvme0n3: ios=1046/1128, merge=0/0, ticks=1364/283, in_queue=1647, util=93.12% 00:15:45.241 nvme0n4: ios=850/1024, merge=0/0, ticks=1396/244, in_queue=1640, util=95.58% 00:15:45.241 21:09:00 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:45.241 [global] 00:15:45.241 thread=1 00:15:45.241 invalidate=1 00:15:45.241 rw=randwrite 00:15:45.241 time_based=1 00:15:45.241 runtime=1 00:15:45.241 ioengine=libaio 00:15:45.241 direct=1 00:15:45.241 bs=4096 00:15:45.241 iodepth=1 00:15:45.241 norandommap=0 00:15:45.241 numjobs=1 00:15:45.241 00:15:45.241 verify_dump=1 00:15:45.241 verify_backlog=512 00:15:45.241 verify_state_save=0 00:15:45.241 do_verify=1 00:15:45.241 verify=crc32c-intel 00:15:45.241 [job0] 00:15:45.241 filename=/dev/nvme0n1 00:15:45.241 [job1] 00:15:45.241 
filename=/dev/nvme0n2 00:15:45.241 [job2] 00:15:45.241 filename=/dev/nvme0n3 00:15:45.241 [job3] 00:15:45.241 filename=/dev/nvme0n4 00:15:45.241 Could not set queue depth (nvme0n1) 00:15:45.241 Could not set queue depth (nvme0n2) 00:15:45.241 Could not set queue depth (nvme0n3) 00:15:45.241 Could not set queue depth (nvme0n4) 00:15:45.241 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.241 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.241 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.241 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.241 fio-3.35 00:15:45.241 Starting 4 threads 00:15:46.622 00:15:46.622 job0: (groupid=0, jobs=1): err= 0: pid=3038131: Thu Apr 18 21:09:02 2024 00:15:46.622 read: IOPS=516, BW=2068KiB/s (2117kB/s)(2080KiB/1006msec) 00:15:46.622 slat (nsec): min=2917, max=23261, avg=5205.80, stdev=2473.75 00:15:46.622 clat (usec): min=305, max=41987, avg=1376.13, stdev=5873.81 00:15:46.622 lat (usec): min=308, max=41997, avg=1381.33, stdev=5874.77 00:15:46.622 clat percentiles (usec): 00:15:46.622 | 1.00th=[ 314], 5.00th=[ 338], 10.00th=[ 363], 20.00th=[ 396], 00:15:46.622 | 30.00th=[ 424], 40.00th=[ 457], 50.00th=[ 490], 60.00th=[ 506], 00:15:46.622 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 594], 95.00th=[ 627], 00:15:46.622 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:15:46.622 | 99.99th=[42206] 00:15:46.622 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:15:46.622 slat (nsec): min=3781, max=45940, avg=8842.80, stdev=3725.21 00:15:46.622 clat (usec): min=182, max=1257, avg=267.98, stdev=57.55 00:15:46.622 lat (usec): min=186, max=1271, avg=276.82, stdev=59.24 00:15:46.622 clat percentiles (usec): 00:15:46.622 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 223], 00:15:46.622 | 30.00th=[ 237], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:15:46.622 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 355], 00:15:46.622 | 99.00th=[ 379], 99.50th=[ 412], 99.90th=[ 529], 99.95th=[ 1254], 00:15:46.622 | 99.99th=[ 1254] 00:15:46.622 bw ( KiB/s): min= 4096, max= 4096, per=20.10%, avg=4096.00, stdev= 0.00, samples=2 00:15:46.622 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:15:46.622 lat (usec) : 250=24.16%, 500=61.20%, 750=13.67%, 1000=0.06% 00:15:46.622 lat (msec) : 2=0.13%, 50=0.78% 00:15:46.622 cpu : usr=0.80%, sys=0.90%, ctx=1547, majf=0, minf=2 00:15:46.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.622 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.622 job1: (groupid=0, jobs=1): err= 0: pid=3038132: Thu Apr 18 21:09:02 2024 00:15:46.622 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:46.622 slat (nsec): min=3491, max=40298, avg=8648.00, stdev=2778.21 00:15:46.622 clat (usec): min=330, max=1081, avg=507.17, stdev=74.52 00:15:46.622 lat (usec): min=336, max=1095, avg=515.82, stdev=74.77 00:15:46.622 clat percentiles (usec): 00:15:46.622 | 1.00th=[ 351], 5.00th=[ 400], 10.00th=[ 437], 
20.00th=[ 478], 00:15:46.622 | 30.00th=[ 490], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 502], 00:15:46.622 | 70.00th=[ 510], 80.00th=[ 523], 90.00th=[ 586], 95.00th=[ 652], 00:15:46.622 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 1057], 99.95th=[ 1090], 00:15:46.622 | 99.99th=[ 1090] 00:15:46.622 write: IOPS=1432, BW=5730KiB/s (5868kB/s)(5736KiB/1001msec); 0 zone resets 00:15:46.622 slat (usec): min=7, max=36229, avg=37.92, stdev=956.40 00:15:46.622 clat (usec): min=177, max=3837, avg=284.76, stdev=133.57 00:15:46.623 lat (usec): min=200, max=36657, avg=322.68, stdev=969.52 00:15:46.623 clat percentiles (usec): 00:15:46.623 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:15:46.623 | 30.00th=[ 237], 40.00th=[ 251], 50.00th=[ 265], 60.00th=[ 273], 00:15:46.623 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 383], 00:15:46.623 | 99.00th=[ 570], 99.50th=[ 709], 99.90th=[ 1778], 99.95th=[ 3851], 00:15:46.623 | 99.99th=[ 3851] 00:15:46.623 bw ( KiB/s): min= 5792, max= 5792, per=28.43%, avg=5792.00, stdev= 0.00, samples=1 00:15:46.623 iops : min= 1448, max= 1448, avg=1448.00, stdev= 0.00, samples=1 00:15:46.623 lat (usec) : 250=23.27%, 500=56.55%, 750=19.28%, 1000=0.61% 00:15:46.623 lat (msec) : 2=0.24%, 4=0.04% 00:15:46.623 cpu : usr=2.70%, sys=3.40%, ctx=2460, majf=0, minf=1 00:15:46.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.623 issued rwts: total=1024,1434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.623 job2: (groupid=0, jobs=1): err= 0: pid=3038136: Thu Apr 18 21:09:02 2024 00:15:46.623 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:46.623 slat (nsec): min=4144, max=23891, avg=9116.21, stdev=2210.18 00:15:46.623 clat (usec): min=267, max=1013, avg=562.86, stdev=101.57 00:15:46.623 lat (usec): min=273, max=1020, avg=571.97, stdev=102.13 00:15:46.623 clat percentiles (usec): 00:15:46.623 | 1.00th=[ 338], 5.00th=[ 433], 10.00th=[ 469], 20.00th=[ 490], 00:15:46.623 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 562], 00:15:46.623 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 775], 00:15:46.623 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 988], 99.95th=[ 1012], 00:15:46.623 | 99.99th=[ 1012] 00:15:46.623 write: IOPS=1128, BW=4515KiB/s (4624kB/s)(4520KiB/1001msec); 0 zone resets 00:15:46.623 slat (nsec): min=7432, max=38304, avg=11043.81, stdev=2445.20 00:15:46.623 clat (usec): min=222, max=1037, avg=348.25, stdev=86.92 00:15:46.623 lat (usec): min=230, max=1054, avg=359.30, stdev=87.45 00:15:46.623 clat percentiles (usec): 00:15:46.623 | 1.00th=[ 231], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 273], 00:15:46.623 | 30.00th=[ 289], 40.00th=[ 310], 50.00th=[ 334], 60.00th=[ 355], 00:15:46.623 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 449], 95.00th=[ 498], 00:15:46.623 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 807], 99.95th=[ 1037], 00:15:46.623 | 99.99th=[ 1037] 00:15:46.623 bw ( KiB/s): min= 4600, max= 4600, per=22.58%, avg=4600.00, stdev= 0.00, samples=1 00:15:46.623 iops : min= 1150, max= 1150, avg=1150.00, stdev= 0.00, samples=1 00:15:46.623 lat (usec) : 250=2.55%, 500=60.58%, 750=34.08%, 1000=2.69% 00:15:46.623 lat (msec) : 2=0.09% 00:15:46.623 cpu : usr=1.80%, sys=3.20%, ctx=2155, majf=0, minf=1 00:15:46.623 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.623 issued rwts: total=1024,1130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.623 job3: (groupid=0, jobs=1): err= 0: pid=3038137: Thu Apr 18 21:09:02 2024 00:15:46.623 read: IOPS=1237, BW=4951KiB/s (5070kB/s)(4956KiB/1001msec) 00:15:46.623 slat (nsec): min=7447, max=23011, avg=8598.72, stdev=1130.68 00:15:46.623 clat (usec): min=331, max=1367, avg=459.44, stdev=58.47 00:15:46.623 lat (usec): min=340, max=1376, avg=468.04, stdev=58.45 00:15:46.623 clat percentiles (usec): 00:15:46.623 | 1.00th=[ 347], 5.00th=[ 367], 10.00th=[ 400], 20.00th=[ 429], 00:15:46.623 | 30.00th=[ 441], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 461], 00:15:46.623 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 545], 95.00th=[ 553], 00:15:46.623 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 922], 99.95th=[ 1369], 00:15:46.623 | 99.99th=[ 1369] 00:15:46.623 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:46.623 slat (nsec): min=10866, max=38655, avg=12287.65, stdev=1842.10 00:15:46.623 clat (usec): min=194, max=884, avg=254.95, stdev=43.01 00:15:46.623 lat (usec): min=206, max=896, avg=267.23, stdev=43.27 00:15:46.623 clat percentiles (usec): 00:15:46.623 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:15:46.623 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:15:46.623 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 306], 95.00th=[ 343], 00:15:46.623 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 685], 99.95th=[ 889], 00:15:46.623 | 99.99th=[ 889] 00:15:46.623 bw ( KiB/s): min= 7016, max= 7016, per=34.44%, avg=7016.00, stdev= 0.00, samples=1 00:15:46.623 iops : min= 1754, max= 1754, avg=1754.00, stdev= 0.00, samples=1 00:15:46.623 lat (usec) : 250=31.82%, 500=60.65%, 750=7.39%, 1000=0.11% 00:15:46.623 lat (msec) : 2=0.04% 00:15:46.623 cpu : usr=2.80%, sys=4.30%, ctx=2776, majf=0, minf=1 00:15:46.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.623 issued rwts: total=1239,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.623 00:15:46.623 Run status group 0 (all jobs): 00:15:46.623 READ: bw=14.8MiB/s (15.5MB/s), 2068KiB/s-4951KiB/s (2117kB/s-5070kB/s), io=14.9MiB (15.6MB), run=1001-1006msec 00:15:46.623 WRITE: bw=19.9MiB/s (20.9MB/s), 4072KiB/s-6138KiB/s (4169kB/s-6285kB/s), io=20.0MiB (21.0MB), run=1001-1006msec 00:15:46.623 00:15:46.623 Disk stats (read/write): 00:15:46.623 nvme0n1: ios=546/1024, merge=0/0, ticks=785/266, in_queue=1051, util=98.10% 00:15:46.623 nvme0n2: ios=1050/1030, merge=0/0, ticks=1468/283, in_queue=1751, util=98.27% 00:15:46.623 nvme0n3: ios=850/1024, merge=0/0, ticks=1428/349, in_queue=1777, util=97.40% 00:15:46.623 nvme0n4: ios=1082/1346, merge=0/0, ticks=1089/328, in_queue=1417, util=98.22% 00:15:46.623 21:09:02 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:46.623 [global] 00:15:46.623 thread=1 00:15:46.623 invalidate=1 00:15:46.623 rw=write 00:15:46.623 time_based=1 
00:15:46.623 runtime=1 00:15:46.623 ioengine=libaio 00:15:46.623 direct=1 00:15:46.623 bs=4096 00:15:46.623 iodepth=128 00:15:46.623 norandommap=0 00:15:46.623 numjobs=1 00:15:46.623 00:15:46.623 verify_dump=1 00:15:46.623 verify_backlog=512 00:15:46.623 verify_state_save=0 00:15:46.623 do_verify=1 00:15:46.623 verify=crc32c-intel 00:15:46.623 [job0] 00:15:46.623 filename=/dev/nvme0n1 00:15:46.623 [job1] 00:15:46.623 filename=/dev/nvme0n2 00:15:46.623 [job2] 00:15:46.623 filename=/dev/nvme0n3 00:15:46.623 [job3] 00:15:46.623 filename=/dev/nvme0n4 00:15:46.623 Could not set queue depth (nvme0n1) 00:15:46.623 Could not set queue depth (nvme0n2) 00:15:46.623 Could not set queue depth (nvme0n3) 00:15:46.623 Could not set queue depth (nvme0n4) 00:15:46.883 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:46.883 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:46.883 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:46.883 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:46.883 fio-3.35 00:15:46.883 Starting 4 threads 00:15:48.298 00:15:48.298 job0: (groupid=0, jobs=1): err= 0: pid=3038573: Thu Apr 18 21:09:03 2024 00:15:48.298 read: IOPS=3320, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:15:48.298 slat (nsec): min=1120, max=22813k, avg=138205.69, stdev=1007348.29 00:15:48.298 clat (usec): min=2611, max=48993, avg=16498.15, stdev=7025.41 00:15:48.298 lat (usec): min=4957, max=48998, avg=16636.35, stdev=7107.78 00:15:48.298 clat percentiles (usec): 00:15:48.298 | 1.00th=[ 6128], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10421], 00:15:48.298 | 30.00th=[11731], 40.00th=[13173], 50.00th=[14222], 60.00th=[16909], 00:15:48.298 | 70.00th=[19006], 80.00th=[21365], 90.00th=[26608], 95.00th=[30016], 00:15:48.298 | 99.00th=[40109], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:15:48.298 | 99.99th=[49021] 00:15:48.298 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:15:48.298 slat (usec): min=2, max=13944, avg=144.37, stdev=783.53 00:15:48.298 clat (usec): min=2079, max=59037, avg=20176.46, stdev=12978.09 00:15:48.298 lat (usec): min=2091, max=59049, avg=20320.83, stdev=13055.27 00:15:48.298 clat percentiles (usec): 00:15:48.298 | 1.00th=[ 3851], 5.00th=[ 7046], 10.00th=[ 7832], 20.00th=[ 9241], 00:15:48.298 | 30.00th=[10421], 40.00th=[12387], 50.00th=[16450], 60.00th=[20055], 00:15:48.298 | 70.00th=[23725], 80.00th=[31851], 90.00th=[41681], 95.00th=[45876], 00:15:48.298 | 99.00th=[54264], 99.50th=[55313], 99.90th=[58983], 99.95th=[58983], 00:15:48.298 | 99.99th=[58983] 00:15:48.298 bw ( KiB/s): min=13328, max=15344, per=23.27%, avg=14336.00, stdev=1425.53, samples=2 00:15:48.298 iops : min= 3332, max= 3836, avg=3584.00, stdev=356.38, samples=2 00:15:48.298 lat (msec) : 4=0.61%, 10=20.03%, 20=44.84%, 50=33.13%, 100=1.39% 00:15:48.298 cpu : usr=1.99%, sys=3.89%, ctx=464, majf=0, minf=1 00:15:48.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:48.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.298 issued rwts: total=3334,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.298 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.298 job1: (groupid=0, jobs=1): 
err= 0: pid=3038574: Thu Apr 18 21:09:03 2024 00:15:48.298 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:15:48.298 slat (nsec): min=1128, max=46029k, avg=127896.14, stdev=1230485.39 00:15:48.298 clat (usec): min=1074, max=56621, avg=18618.48, stdev=10176.10 00:15:48.298 lat (usec): min=1079, max=80864, avg=18746.37, stdev=10259.30 00:15:48.298 clat percentiles (usec): 00:15:48.298 | 1.00th=[ 5211], 5.00th=[ 8455], 10.00th=[10159], 20.00th=[11338], 00:15:48.298 | 30.00th=[12387], 40.00th=[14091], 50.00th=[16188], 60.00th=[17957], 00:15:48.298 | 70.00th=[20317], 80.00th=[23987], 90.00th=[29492], 95.00th=[40633], 00:15:48.298 | 99.00th=[55837], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:15:48.298 | 99.99th=[56361] 00:15:48.298 write: IOPS=3720, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1008msec); 0 zone resets 00:15:48.298 slat (nsec): min=1971, max=13611k, avg=102907.62, stdev=719234.81 00:15:48.298 clat (usec): min=1172, max=57328, avg=16353.81, stdev=8303.85 00:15:48.298 lat (usec): min=1183, max=57338, avg=16456.71, stdev=8346.29 00:15:48.298 clat percentiles (usec): 00:15:48.298 | 1.00th=[ 3884], 5.00th=[ 6390], 10.00th=[ 8029], 20.00th=[ 9241], 00:15:48.298 | 30.00th=[10945], 40.00th=[12256], 50.00th=[13960], 60.00th=[16057], 00:15:48.298 | 70.00th=[19792], 80.00th=[23987], 90.00th=[27657], 95.00th=[32113], 00:15:48.298 | 99.00th=[42206], 99.50th=[47449], 99.90th=[50594], 99.95th=[56361], 00:15:48.298 | 99.99th=[57410] 00:15:48.298 bw ( KiB/s): min=12824, max=16152, per=23.52%, avg=14488.00, stdev=2353.25, samples=2 00:15:48.298 iops : min= 3206, max= 4038, avg=3622.00, stdev=588.31, samples=2 00:15:48.298 lat (msec) : 2=0.08%, 4=0.85%, 10=16.92%, 20=52.54%, 50=27.35% 00:15:48.298 lat (msec) : 100=2.26% 00:15:48.298 cpu : usr=2.28%, sys=3.67%, ctx=307, majf=0, minf=1 00:15:48.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:15:48.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.298 issued rwts: total=3584,3750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.298 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.298 job2: (groupid=0, jobs=1): err= 0: pid=3038575: Thu Apr 18 21:09:03 2024 00:15:48.298 read: IOPS=3308, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1005msec) 00:15:48.298 slat (nsec): min=1056, max=25019k, avg=137728.44, stdev=935720.95 00:15:48.298 clat (usec): min=2818, max=52390, avg=16653.64, stdev=7746.40 00:15:48.298 lat (usec): min=5783, max=55500, avg=16791.36, stdev=7767.44 00:15:48.298 clat percentiles (usec): 00:15:48.298 | 1.00th=[ 7504], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[12518], 00:15:48.298 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14746], 60.00th=[15008], 00:15:48.298 | 70.00th=[15926], 80.00th=[17695], 90.00th=[24249], 95.00th=[33817], 00:15:48.298 | 99.00th=[48497], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:15:48.298 | 99.99th=[52167] 00:15:48.298 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:15:48.298 slat (nsec): min=1928, max=46352k, avg=147786.02, stdev=1274933.34 00:15:48.298 clat (usec): min=6161, max=73084, avg=17386.05, stdev=7433.86 00:15:48.298 lat (usec): min=6165, max=97723, avg=17533.84, stdev=7618.22 00:15:48.298 clat percentiles (usec): 00:15:48.298 | 1.00th=[ 8848], 5.00th=[10945], 10.00th=[11731], 20.00th=[12649], 00:15:48.298 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14353], 60.00th=[15139], 00:15:48.298 | 
70.00th=[17957], 80.00th=[22414], 90.00th=[26870], 95.00th=[31065], 00:15:48.298 | 99.00th=[42730], 99.50th=[50594], 99.90th=[50594], 99.95th=[55837], 00:15:48.298 | 99.99th=[72877] 00:15:48.298 bw ( KiB/s): min=12288, max=16384, per=23.27%, avg=14336.00, stdev=2896.31, samples=2 00:15:48.298 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:15:48.298 lat (msec) : 4=0.01%, 10=4.39%, 20=75.06%, 50=19.81%, 100=0.72% 00:15:48.298 cpu : usr=1.39%, sys=2.99%, ctx=405, majf=0, minf=1 00:15:48.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:48.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.298 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.298 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.298 job3: (groupid=0, jobs=1): err= 0: pid=3038577: Thu Apr 18 21:09:03 2024 00:15:48.298 read: IOPS=4491, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1002msec) 00:15:48.298 slat (nsec): min=1425, max=24989k, avg=107774.65, stdev=810925.00 00:15:48.298 clat (usec): min=871, max=51800, avg=14470.25, stdev=6034.40 00:15:48.298 lat (usec): min=6082, max=51804, avg=14578.02, stdev=6065.17 00:15:48.298 clat percentiles (usec): 00:15:48.298 | 1.00th=[ 6718], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10552], 00:15:48.298 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13566], 60.00th=[14353], 00:15:48.298 | 70.00th=[15270], 80.00th=[16909], 90.00th=[18482], 95.00th=[21890], 00:15:48.298 | 99.00th=[49546], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:15:48.298 | 99.99th=[51643] 00:15:48.298 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:15:48.298 slat (usec): min=2, max=29133, avg=105.34, stdev=748.65 00:15:48.298 clat (usec): min=1657, max=30608, avg=12635.98, stdev=4902.91 00:15:48.298 lat (usec): min=1675, max=30797, avg=12741.32, stdev=4929.48 00:15:48.298 clat percentiles (usec): 00:15:48.298 | 1.00th=[ 4686], 5.00th=[ 6521], 10.00th=[ 7439], 20.00th=[ 8979], 00:15:48.299 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[11863], 60.00th=[12649], 00:15:48.299 | 70.00th=[13829], 80.00th=[15533], 90.00th=[17695], 95.00th=[24511], 00:15:48.299 | 99.00th=[28443], 99.50th=[29230], 99.90th=[30016], 99.95th=[30540], 00:15:48.299 | 99.99th=[30540] 00:15:48.299 bw ( KiB/s): min=16384, max=20480, per=29.92%, avg=18432.00, stdev=2896.31, samples=2 00:15:48.299 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:15:48.299 lat (usec) : 1000=0.01% 00:15:48.299 lat (msec) : 2=0.02%, 4=0.10%, 10=23.42%, 20=68.80%, 50=7.35% 00:15:48.299 lat (msec) : 100=0.31% 00:15:48.299 cpu : usr=3.20%, sys=5.09%, ctx=417, majf=0, minf=1 00:15:48.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:48.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.299 issued rwts: total=4500,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.299 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.299 00:15:48.299 Run status group 0 (all jobs): 00:15:48.299 READ: bw=57.1MiB/s (59.9MB/s), 12.9MiB/s-17.5MiB/s (13.6MB/s-18.4MB/s), io=57.6MiB (60.4MB), run=1002-1008msec 00:15:48.299 WRITE: bw=60.2MiB/s (63.1MB/s), 13.9MiB/s-18.0MiB/s (14.6MB/s-18.8MB/s), io=60.6MiB (63.6MB), run=1002-1008msec 00:15:48.299 00:15:48.299 Disk stats (read/write): 
00:15:48.299 nvme0n1: ios=2598/2735, merge=0/0, ticks=45435/58677, in_queue=104112, util=96.90% 00:15:48.299 nvme0n2: ios=3108/3453, merge=0/0, ticks=50979/52597, in_queue=103576, util=100.00% 00:15:48.299 nvme0n3: ios=2795/3072, merge=0/0, ticks=17345/18785, in_queue=36130, util=95.54% 00:15:48.299 nvme0n4: ios=3918/4096, merge=0/0, ticks=46717/44517, in_queue=91234, util=100.00% 00:15:48.299 21:09:03 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:48.299 [global] 00:15:48.299 thread=1 00:15:48.299 invalidate=1 00:15:48.299 rw=randwrite 00:15:48.299 time_based=1 00:15:48.299 runtime=1 00:15:48.299 ioengine=libaio 00:15:48.299 direct=1 00:15:48.299 bs=4096 00:15:48.299 iodepth=128 00:15:48.299 norandommap=0 00:15:48.299 numjobs=1 00:15:48.299 00:15:48.299 verify_dump=1 00:15:48.299 verify_backlog=512 00:15:48.299 verify_state_save=0 00:15:48.299 do_verify=1 00:15:48.299 verify=crc32c-intel 00:15:48.299 [job0] 00:15:48.299 filename=/dev/nvme0n1 00:15:48.299 [job1] 00:15:48.299 filename=/dev/nvme0n2 00:15:48.299 [job2] 00:15:48.299 filename=/dev/nvme0n3 00:15:48.299 [job3] 00:15:48.299 filename=/dev/nvme0n4 00:15:48.299 Could not set queue depth (nvme0n1) 00:15:48.299 Could not set queue depth (nvme0n2) 00:15:48.299 Could not set queue depth (nvme0n3) 00:15:48.299 Could not set queue depth (nvme0n4) 00:15:48.559 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:48.559 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:48.559 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:48.559 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:48.559 fio-3.35 00:15:48.559 Starting 4 threads 00:15:49.932 00:15:49.932 job0: (groupid=0, jobs=1): err= 0: pid=3038943: Thu Apr 18 21:09:05 2024 00:15:49.932 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:15:49.932 slat (nsec): min=1016, max=43053k, avg=186403.51, stdev=1312147.78 00:15:49.932 clat (usec): min=5433, max=61192, avg=24181.48, stdev=15165.84 00:15:49.932 lat (usec): min=5437, max=64988, avg=24367.89, stdev=15231.11 00:15:49.932 clat percentiles (usec): 00:15:49.932 | 1.00th=[ 5473], 5.00th=[10028], 10.00th=[10814], 20.00th=[12125], 00:15:49.932 | 30.00th=[12780], 40.00th=[14353], 50.00th=[15533], 60.00th=[21103], 00:15:49.932 | 70.00th=[31327], 80.00th=[41157], 90.00th=[48497], 95.00th=[53216], 00:15:49.932 | 99.00th=[61080], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:15:49.932 | 99.99th=[61080] 00:15:49.932 write: IOPS=3245, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1005msec); 0 zone resets 00:15:49.932 slat (nsec): min=1708, max=19766k, avg=127317.73, stdev=803956.73 00:15:49.932 clat (usec): min=1031, max=51494, avg=16114.84, stdev=8681.66 00:15:49.932 lat (usec): min=1633, max=51501, avg=16242.16, stdev=8717.55 00:15:49.932 clat percentiles (usec): 00:15:49.932 | 1.00th=[ 4047], 5.00th=[ 6390], 10.00th=[ 8979], 20.00th=[10552], 00:15:49.932 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12649], 60.00th=[13435], 00:15:49.932 | 70.00th=[16188], 80.00th=[23725], 90.00th=[28443], 95.00th=[34341], 00:15:49.932 | 99.00th=[44303], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:15:49.932 | 99.99th=[51643] 00:15:49.932 bw ( KiB/s): min= 6928, max=18144, per=21.57%, avg=12536.00, 
stdev=7930.91, samples=2 00:15:49.932 iops : min= 1732, max= 4536, avg=3134.00, stdev=1982.73, samples=2 00:15:49.932 lat (msec) : 2=0.24%, 4=0.24%, 10=10.48%, 20=55.95%, 50=28.21% 00:15:49.932 lat (msec) : 100=4.88% 00:15:49.933 cpu : usr=2.39%, sys=2.29%, ctx=335, majf=0, minf=1 00:15:49.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:49.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.933 issued rwts: total=3072,3262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.933 job1: (groupid=0, jobs=1): err= 0: pid=3038944: Thu Apr 18 21:09:05 2024 00:15:49.933 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:15:49.933 slat (nsec): min=1635, max=13617k, avg=154104.01, stdev=952102.62 00:15:49.933 clat (usec): min=12866, max=45514, avg=19117.23, stdev=4132.99 00:15:49.933 lat (usec): min=12875, max=45521, avg=19271.33, stdev=4233.60 00:15:49.933 clat percentiles (usec): 00:15:49.933 | 1.00th=[13042], 5.00th=[14091], 10.00th=[14877], 20.00th=[15664], 00:15:49.933 | 30.00th=[16319], 40.00th=[17433], 50.00th=[18220], 60.00th=[19792], 00:15:49.933 | 70.00th=[20841], 80.00th=[22414], 90.00th=[23987], 95.00th=[25035], 00:15:49.933 | 99.00th=[34341], 99.50th=[40109], 99.90th=[45351], 99.95th=[45351], 00:15:49.933 | 99.99th=[45351] 00:15:49.933 write: IOPS=3145, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1006msec); 0 zone resets 00:15:49.933 slat (usec): min=2, max=15801, avg=160.64, stdev=807.23 00:15:49.933 clat (usec): min=5395, max=58473, avg=21684.95, stdev=8548.58 00:15:49.933 lat (usec): min=5401, max=58478, avg=21845.59, stdev=8610.85 00:15:49.933 clat percentiles (usec): 00:15:49.933 | 1.00th=[ 8029], 5.00th=[12518], 10.00th=[12911], 20.00th=[13566], 00:15:49.933 | 30.00th=[13960], 40.00th=[16712], 50.00th=[21627], 60.00th=[25035], 00:15:49.933 | 70.00th=[26870], 80.00th=[28705], 90.00th=[30278], 95.00th=[34341], 00:15:49.933 | 99.00th=[47449], 99.50th=[52691], 99.90th=[58459], 99.95th=[58459], 00:15:49.933 | 99.99th=[58459] 00:15:49.933 bw ( KiB/s): min=12288, max=12288, per=21.14%, avg=12288.00, stdev= 0.00, samples=2 00:15:49.933 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:15:49.933 lat (msec) : 10=0.87%, 20=53.54%, 50=45.11%, 100=0.48% 00:15:49.933 cpu : usr=1.99%, sys=3.98%, ctx=375, majf=0, minf=1 00:15:49.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:49.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.933 issued rwts: total=3072,3164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.933 job2: (groupid=0, jobs=1): err= 0: pid=3038945: Thu Apr 18 21:09:05 2024 00:15:49.933 read: IOPS=2653, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1005msec) 00:15:49.933 slat (nsec): min=1070, max=13392k, avg=174076.26, stdev=965675.33 00:15:49.933 clat (usec): min=2068, max=46586, avg=21583.90, stdev=7236.06 00:15:49.933 lat (usec): min=4941, max=46643, avg=21757.97, stdev=7313.81 00:15:49.933 clat percentiles (usec): 00:15:49.933 | 1.00th=[10290], 5.00th=[11076], 10.00th=[12387], 20.00th=[14877], 00:15:49.933 | 30.00th=[17695], 40.00th=[19792], 50.00th=[21103], 60.00th=[22152], 00:15:49.933 | 70.00th=[23725], 80.00th=[27657], 90.00th=[31589], 
95.00th=[34866], 00:15:49.933 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[43779], 00:15:49.933 | 99.99th=[46400] 00:15:49.933 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:15:49.933 slat (nsec): min=1862, max=17429k, avg=164476.39, stdev=846704.28 00:15:49.933 clat (usec): min=5167, max=54576, avg=22506.13, stdev=11335.50 00:15:49.933 lat (usec): min=5176, max=54583, avg=22670.61, stdev=11415.95 00:15:49.933 clat percentiles (usec): 00:15:49.933 | 1.00th=[ 7111], 5.00th=[10028], 10.00th=[10552], 20.00th=[12780], 00:15:49.933 | 30.00th=[15008], 40.00th=[17695], 50.00th=[19006], 60.00th=[21365], 00:15:49.933 | 70.00th=[24773], 80.00th=[32900], 90.00th=[41157], 95.00th=[46400], 00:15:49.933 | 99.00th=[52167], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:15:49.933 | 99.99th=[54789] 00:15:49.933 bw ( KiB/s): min=11336, max=13072, per=21.00%, avg=12204.00, stdev=1227.54, samples=2 00:15:49.933 iops : min= 2834, max= 3268, avg=3051.00, stdev=306.88, samples=2 00:15:49.933 lat (msec) : 4=0.02%, 10=2.88%, 20=45.06%, 50=50.74%, 100=1.31% 00:15:49.933 cpu : usr=1.59%, sys=4.98%, ctx=314, majf=0, minf=1 00:15:49.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:49.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.933 issued rwts: total=2667,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.933 job3: (groupid=0, jobs=1): err= 0: pid=3038946: Thu Apr 18 21:09:05 2024 00:15:49.933 read: IOPS=4967, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1004msec) 00:15:49.933 slat (nsec): min=1066, max=11114k, avg=98596.37, stdev=687357.52 00:15:49.933 clat (usec): min=2681, max=35129, avg=12613.18, stdev=4141.81 00:15:49.933 lat (usec): min=5029, max=35138, avg=12711.78, stdev=4204.11 00:15:49.933 clat percentiles (usec): 00:15:49.933 | 1.00th=[ 6456], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9634], 00:15:49.933 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11863], 60.00th=[12518], 00:15:49.933 | 70.00th=[12780], 80.00th=[14091], 90.00th=[19006], 95.00th=[21365], 00:15:49.933 | 99.00th=[26084], 99.50th=[27132], 99.90th=[34866], 99.95th=[34866], 00:15:49.933 | 99.99th=[35390] 00:15:49.933 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:15:49.933 slat (nsec): min=1906, max=8784.6k, avg=91785.64, stdev=516850.07 00:15:49.933 clat (usec): min=3210, max=35124, avg=12397.44, stdev=4746.18 00:15:49.933 lat (usec): min=3219, max=35131, avg=12489.22, stdev=4761.04 00:15:49.933 clat percentiles (usec): 00:15:49.933 | 1.00th=[ 4883], 5.00th=[ 6259], 10.00th=[ 7111], 20.00th=[ 8717], 00:15:49.933 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11994], 60.00th=[12387], 00:15:49.933 | 70.00th=[13304], 80.00th=[14877], 90.00th=[19006], 95.00th=[22152], 00:15:49.933 | 99.00th=[27657], 99.50th=[28443], 99.90th=[31327], 99.95th=[34866], 00:15:49.933 | 99.99th=[34866] 00:15:49.933 bw ( KiB/s): min=16896, max=24064, per=35.24%, avg=20480.00, stdev=5068.54, samples=2 00:15:49.933 iops : min= 4224, max= 6016, avg=5120.00, stdev=1267.14, samples=2 00:15:49.933 lat (msec) : 4=0.16%, 10=30.67%, 20=60.36%, 50=8.81% 00:15:49.933 cpu : usr=3.79%, sys=5.38%, ctx=448, majf=0, minf=1 00:15:49.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:49.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:15:49.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.933 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.933 00:15:49.933 Run status group 0 (all jobs): 00:15:49.933 READ: bw=53.6MiB/s (56.2MB/s), 10.4MiB/s-19.4MiB/s (10.9MB/s-20.3MB/s), io=53.9MiB (56.5MB), run=1004-1006msec 00:15:49.933 WRITE: bw=56.8MiB/s (59.5MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=57.1MiB (59.9MB), run=1004-1006msec 00:15:49.933 00:15:49.933 Disk stats (read/write): 00:15:49.933 nvme0n1: ios=2774/3072, merge=0/0, ticks=16622/14064, in_queue=30686, util=86.87% 00:15:49.933 nvme0n2: ios=2524/2560, merge=0/0, ticks=23964/28677, in_queue=52641, util=87.17% 00:15:49.933 nvme0n3: ios=2323/2560, merge=0/0, ticks=18965/21575, in_queue=40540, util=88.60% 00:15:49.933 nvme0n4: ios=4153/4238, merge=0/0, ticks=36299/31839, in_queue=68138, util=98.00% 00:15:49.933 21:09:05 -- target/fio.sh@55 -- # sync 00:15:49.933 21:09:05 -- target/fio.sh@59 -- # fio_pid=3039177 00:15:49.933 21:09:05 -- target/fio.sh@61 -- # sleep 3 00:15:49.933 21:09:05 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:49.933 [global] 00:15:49.933 thread=1 00:15:49.933 invalidate=1 00:15:49.933 rw=read 00:15:49.933 time_based=1 00:15:49.933 runtime=10 00:15:49.933 ioengine=libaio 00:15:49.933 direct=1 00:15:49.933 bs=4096 00:15:49.933 iodepth=1 00:15:49.933 norandommap=1 00:15:49.933 numjobs=1 00:15:49.933 00:15:49.933 [job0] 00:15:49.933 filename=/dev/nvme0n1 00:15:49.933 [job1] 00:15:49.933 filename=/dev/nvme0n2 00:15:49.933 [job2] 00:15:49.933 filename=/dev/nvme0n3 00:15:49.933 [job3] 00:15:49.933 filename=/dev/nvme0n4 00:15:49.933 Could not set queue depth (nvme0n1) 00:15:49.933 Could not set queue depth (nvme0n2) 00:15:49.933 Could not set queue depth (nvme0n3) 00:15:49.933 Could not set queue depth (nvme0n4) 00:15:49.933 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:49.933 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:49.933 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:49.933 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:49.933 fio-3.35 00:15:49.933 Starting 4 threads 00:15:53.207 21:09:08 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:53.207 21:09:08 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:53.207 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=483328, buflen=4096 00:15:53.207 fio: pid=3039323, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:53.207 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26333184, buflen=4096 00:15:53.207 fio: pid=3039322, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:53.207 21:09:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:53.207 21:09:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:53.207 21:09:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:15:53.207 21:09:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:53.207 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=315392, buflen=4096 00:15:53.207 fio: pid=3039320, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:53.465 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=28831744, buflen=4096 00:15:53.465 fio: pid=3039321, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:53.465 21:09:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:53.465 21:09:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:53.465 00:15:53.465 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3039320: Thu Apr 18 21:09:09 2024 00:15:53.465 read: IOPS=24, BW=97.3KiB/s (99.6kB/s)(308KiB/3166msec) 00:15:53.465 slat (usec): min=12, max=21902, avg=303.87, stdev=2477.27 00:15:53.465 clat (usec): min=716, max=42030, avg=40533.59, stdev=4606.57 00:15:53.465 lat (usec): min=751, max=63096, avg=40841.16, stdev=5273.07 00:15:53.465 clat percentiles (usec): 00:15:53.465 | 1.00th=[ 717], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:53.465 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:53.465 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:15:53.465 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:53.465 | 99.99th=[42206] 00:15:53.465 bw ( KiB/s): min= 96, max= 104, per=0.59%, avg=97.83, stdev= 3.25, samples=6 00:15:53.465 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:15:53.465 lat (usec) : 750=1.28% 00:15:53.465 lat (msec) : 50=97.44% 00:15:53.465 cpu : usr=0.13%, sys=0.00%, ctx=80, majf=0, minf=1 00:15:53.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:53.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.465 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.465 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.465 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:53.465 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3039321: Thu Apr 18 21:09:09 2024 00:15:53.465 read: IOPS=2119, BW=8478KiB/s (8682kB/s)(27.5MiB/3321msec) 00:15:53.465 slat (usec): min=4, max=25503, avg=17.10, stdev=402.50 00:15:53.465 clat (usec): min=291, max=2449, avg=451.33, stdev=118.64 00:15:53.465 lat (usec): min=299, max=26205, avg=468.43, stdev=422.65 00:15:53.465 clat percentiles (usec): 00:15:53.465 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 347], 00:15:53.465 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 404], 60.00th=[ 490], 00:15:53.465 | 70.00th=[ 523], 80.00th=[ 562], 90.00th=[ 594], 95.00th=[ 627], 00:15:53.465 | 99.00th=[ 734], 99.50th=[ 816], 99.90th=[ 1074], 99.95th=[ 1958], 00:15:53.465 | 99.99th=[ 2442] 00:15:53.465 bw ( KiB/s): min= 6576, max=11216, per=52.99%, avg=8720.17, stdev=1793.86, samples=6 00:15:53.465 iops : min= 1644, max= 2804, avg=2180.00, stdev=448.51, samples=6 00:15:53.465 lat (usec) : 500=62.37%, 750=36.76%, 1000=0.70% 00:15:53.465 lat (msec) : 2=0.11%, 4=0.04% 00:15:53.465 cpu : usr=1.48%, sys=3.10%, ctx=7045, majf=0, minf=1 00:15:53.465 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:53.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.465 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.465 issued rwts: total=7040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.465 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:53.466 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3039322: Thu Apr 18 21:09:09 2024 00:15:53.466 read: IOPS=2219, BW=8877KiB/s (9090kB/s)(25.1MiB/2897msec) 00:15:53.466 slat (nsec): min=6429, max=50882, avg=8106.28, stdev=2099.45 00:15:53.466 clat (usec): min=229, max=41222, avg=437.39, stdev=1087.37 00:15:53.466 lat (usec): min=236, max=41233, avg=445.49, stdev=1087.74 00:15:53.466 clat percentiles (usec): 00:15:53.466 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 334], 00:15:53.466 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 408], 00:15:53.466 | 70.00th=[ 457], 80.00th=[ 474], 90.00th=[ 498], 95.00th=[ 515], 00:15:53.466 | 99.00th=[ 652], 99.50th=[ 717], 99.90th=[ 1811], 99.95th=[41157], 00:15:53.466 | 99.99th=[41157] 00:15:53.466 bw ( KiB/s): min= 5776, max=11216, per=54.95%, avg=9043.20, stdev=2107.31, samples=5 00:15:53.466 iops : min= 1444, max= 2804, avg=2260.80, stdev=526.83, samples=5 00:15:53.466 lat (usec) : 250=0.17%, 500=90.31%, 750=9.16%, 1000=0.22% 00:15:53.466 lat (msec) : 2=0.03%, 4=0.02%, 50=0.08% 00:15:53.466 cpu : usr=1.38%, sys=2.80%, ctx=6431, majf=0, minf=1 00:15:53.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:53.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.466 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.466 issued rwts: total=6430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:53.466 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3039323: Thu Apr 18 21:09:09 2024 00:15:53.466 read: IOPS=43, BW=174KiB/s (178kB/s)(472KiB/2719msec) 00:15:53.466 slat (nsec): min=7330, max=33978, avg=16372.84, stdev=7328.29 00:15:53.466 clat (usec): min=354, max=42122, avg=22841.05, stdev=20260.46 00:15:53.466 lat (usec): min=362, max=42146, avg=22857.37, stdev=20267.65 00:15:53.466 clat percentiles (usec): 00:15:53.466 | 1.00th=[ 359], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 441], 00:15:53.466 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[41157], 60.00th=[41157], 00:15:53.466 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:15:53.466 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:53.466 | 99.99th=[42206] 00:15:53.466 bw ( KiB/s): min= 96, max= 520, per=1.09%, avg=180.80, stdev=189.62, samples=5 00:15:53.466 iops : min= 24, max= 130, avg=45.20, stdev=47.40, samples=5 00:15:53.466 lat (usec) : 500=22.69%, 750=21.01% 00:15:53.466 lat (msec) : 2=0.84%, 50=54.62% 00:15:53.466 cpu : usr=0.04%, sys=0.07%, ctx=119, majf=0, minf=2 00:15:53.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:53.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.466 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.466 issued rwts: total=119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:53.466 
00:15:53.466 Run status group 0 (all jobs): 00:15:53.466 READ: bw=16.1MiB/s (16.9MB/s), 97.3KiB/s-8877KiB/s (99.6kB/s-9090kB/s), io=53.4MiB (56.0MB), run=2719-3321msec 00:15:53.466 00:15:53.466 Disk stats (read/write): 00:15:53.466 nvme0n1: ios=115/0, merge=0/0, ticks=4089/0, in_queue=4089, util=98.46% 00:15:53.466 nvme0n2: ios=6722/0, merge=0/0, ticks=2914/0, in_queue=2914, util=95.48% 00:15:53.466 nvme0n3: ios=6422/0, merge=0/0, ticks=3680/0, in_queue=3680, util=99.02% 00:15:53.466 nvme0n4: ios=115/0, merge=0/0, ticks=2572/0, in_queue=2572, util=96.45% 00:15:53.723 21:09:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:53.723 21:09:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:53.980 21:09:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:53.980 21:09:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:53.980 21:09:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:53.980 21:09:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:54.237 21:09:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:54.237 21:09:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:54.495 21:09:10 -- target/fio.sh@69 -- # fio_status=0 00:15:54.495 21:09:10 -- target/fio.sh@70 -- # wait 3039177 00:15:54.495 21:09:10 -- target/fio.sh@70 -- # fio_status=4 00:15:54.495 21:09:10 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.495 21:09:10 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:54.495 21:09:10 -- common/autotest_common.sh@1205 -- # local i=0 00:15:54.495 21:09:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:54.495 21:09:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.495 21:09:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:54.495 21:09:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.752 21:09:10 -- common/autotest_common.sh@1217 -- # return 0 00:15:54.752 21:09:10 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:54.752 21:09:10 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:54.752 nvmf hotplug test: fio failed as expected 00:15:54.752 21:09:10 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.752 21:09:10 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:54.752 21:09:10 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:54.752 21:09:10 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:54.752 21:09:10 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:54.752 21:09:10 -- target/fio.sh@91 -- # nvmftestfini 00:15:54.752 21:09:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:54.752 21:09:10 -- nvmf/common.sh@117 -- # sync 00:15:54.752 21:09:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:54.752 21:09:10 -- nvmf/common.sh@120 -- # set +e 00:15:54.752 21:09:10 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:15:54.752 21:09:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:54.752 rmmod nvme_tcp 00:15:54.752 rmmod nvme_fabrics 00:15:54.752 rmmod nvme_keyring 00:15:55.011 21:09:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.011 21:09:10 -- nvmf/common.sh@124 -- # set -e 00:15:55.011 21:09:10 -- nvmf/common.sh@125 -- # return 0 00:15:55.011 21:09:10 -- nvmf/common.sh@478 -- # '[' -n 3036125 ']' 00:15:55.011 21:09:10 -- nvmf/common.sh@479 -- # killprocess 3036125 00:15:55.011 21:09:10 -- common/autotest_common.sh@936 -- # '[' -z 3036125 ']' 00:15:55.011 21:09:10 -- common/autotest_common.sh@940 -- # kill -0 3036125 00:15:55.011 21:09:10 -- common/autotest_common.sh@941 -- # uname 00:15:55.011 21:09:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.011 21:09:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3036125 00:15:55.011 21:09:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.011 21:09:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.011 21:09:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3036125' 00:15:55.011 killing process with pid 3036125 00:15:55.011 21:09:10 -- common/autotest_common.sh@955 -- # kill 3036125 00:15:55.011 21:09:10 -- common/autotest_common.sh@960 -- # wait 3036125 00:15:55.270 21:09:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:55.270 21:09:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:55.270 21:09:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:55.270 21:09:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.270 21:09:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:55.270 21:09:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.270 21:09:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.270 21:09:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.252 21:09:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:57.252 00:15:57.252 real 0m27.601s 00:15:57.252 user 1m46.361s 00:15:57.252 sys 0m8.681s 00:15:57.252 21:09:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:57.252 21:09:13 -- common/autotest_common.sh@10 -- # set +x 00:15:57.252 ************************************ 00:15:57.252 END TEST nvmf_fio_target 00:15:57.252 ************************************ 00:15:57.252 21:09:13 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:57.252 21:09:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:57.252 21:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.252 21:09:13 -- common/autotest_common.sh@10 -- # set +x 00:15:57.510 ************************************ 00:15:57.510 START TEST nvmf_bdevio 00:15:57.510 ************************************ 00:15:57.510 21:09:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:57.510 * Looking for test storage... 
00:15:57.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.510 21:09:13 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.511 21:09:13 -- nvmf/common.sh@7 -- # uname -s 00:15:57.511 21:09:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.511 21:09:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.511 21:09:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.511 21:09:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.511 21:09:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.511 21:09:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.511 21:09:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.511 21:09:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.511 21:09:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.511 21:09:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.511 21:09:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.511 21:09:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.511 21:09:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.511 21:09:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.511 21:09:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.511 21:09:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.511 21:09:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.511 21:09:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.511 21:09:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.511 21:09:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.511 21:09:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.511 21:09:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.511 21:09:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.511 21:09:13 -- paths/export.sh@5 -- # export PATH 00:15:57.511 21:09:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.511 21:09:13 -- nvmf/common.sh@47 -- # : 0 00:15:57.511 21:09:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.511 21:09:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.511 21:09:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.511 21:09:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.511 21:09:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.511 21:09:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.511 21:09:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.511 21:09:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.511 21:09:13 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.511 21:09:13 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.511 21:09:13 -- target/bdevio.sh@14 -- # nvmftestinit 00:15:57.511 21:09:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:57.511 21:09:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.511 21:09:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:57.511 21:09:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:57.511 21:09:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:57.511 21:09:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.511 21:09:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.511 21:09:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.511 21:09:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:57.511 21:09:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:57.511 21:09:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:57.511 21:09:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.075 21:09:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:04.075 21:09:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.075 21:09:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.075 21:09:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.075 21:09:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.075 21:09:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.075 21:09:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.075 21:09:19 -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.075 21:09:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.075 21:09:19 -- nvmf/common.sh@296 
-- # e810=() 00:16:04.075 21:09:19 -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.075 21:09:19 -- nvmf/common.sh@297 -- # x722=() 00:16:04.075 21:09:19 -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.075 21:09:19 -- nvmf/common.sh@298 -- # mlx=() 00:16:04.075 21:09:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.075 21:09:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.075 21:09:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.075 21:09:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.075 21:09:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.075 21:09:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.075 21:09:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:04.075 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:04.075 21:09:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.075 21:09:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:04.075 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:04.075 21:09:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.075 21:09:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.075 21:09:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.075 21:09:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:04.075 21:09:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.075 21:09:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:04.075 Found 
net devices under 0000:86:00.0: cvl_0_0 00:16:04.075 21:09:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.075 21:09:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.075 21:09:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.075 21:09:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:04.075 21:09:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.075 21:09:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:04.075 Found net devices under 0000:86:00.1: cvl_0_1 00:16:04.075 21:09:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.075 21:09:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:04.075 21:09:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:04.075 21:09:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:04.075 21:09:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:04.075 21:09:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.075 21:09:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.075 21:09:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.075 21:09:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.075 21:09:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.075 21:09:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.075 21:09:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.075 21:09:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.075 21:09:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.075 21:09:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.075 21:09:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.075 21:09:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.076 21:09:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.076 21:09:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.076 21:09:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.076 21:09:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.076 21:09:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.076 21:09:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.076 21:09:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.076 21:09:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:16:04.076 00:16:04.076 --- 10.0.0.2 ping statistics --- 00:16:04.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.076 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:04.076 21:09:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:16:04.076 00:16:04.076 --- 10.0.0.1 ping statistics --- 00:16:04.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.076 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:16:04.076 21:09:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.076 21:09:19 -- nvmf/common.sh@411 -- # return 0 00:16:04.076 21:09:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:04.076 21:09:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.076 21:09:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:04.076 21:09:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:04.076 21:09:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.076 21:09:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:04.076 21:09:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:04.076 21:09:19 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:04.076 21:09:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:04.076 21:09:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:04.076 21:09:19 -- common/autotest_common.sh@10 -- # set +x 00:16:04.076 21:09:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:04.076 21:09:19 -- nvmf/common.sh@470 -- # nvmfpid=3044467 00:16:04.076 21:09:19 -- nvmf/common.sh@471 -- # waitforlisten 3044467 00:16:04.076 21:09:19 -- common/autotest_common.sh@817 -- # '[' -z 3044467 ']' 00:16:04.076 21:09:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.076 21:09:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:04.076 21:09:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.076 21:09:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:04.076 21:09:19 -- common/autotest_common.sh@10 -- # set +x 00:16:04.076 [2024-04-18 21:09:19.569315] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:16:04.076 [2024-04-18 21:09:19.569355] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.076 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.076 [2024-04-18 21:09:19.633049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.076 [2024-04-18 21:09:19.710722] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.076 [2024-04-18 21:09:19.710758] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.076 [2024-04-18 21:09:19.710765] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.076 [2024-04-18 21:09:19.710771] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.076 [2024-04-18 21:09:19.710777] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
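For readers unfamiliar with the harness, the nvmf_tcp_init trace above splits the two E810 ports into a small point-to-point topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, with TCP port 4420 opened in iptables. Condensed to the commands actually traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The nvmf_tgt instance started below runs inside that namespace (NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk"), which is why the harness pings 10.0.0.2 from the default namespace and 10.0.0.1 from inside the namespace before continuing.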
00:16:04.076 [2024-04-18 21:09:19.710885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:04.076 [2024-04-18 21:09:19.711001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:04.076 [2024-04-18 21:09:19.711106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.076 [2024-04-18 21:09:19.711107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:04.645 21:09:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:04.645 21:09:20 -- common/autotest_common.sh@850 -- # return 0 00:16:04.645 21:09:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:04.645 21:09:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:04.645 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:16:04.645 21:09:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.645 21:09:20 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.645 21:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.645 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:16:04.645 [2024-04-18 21:09:20.434375] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.645 21:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.645 21:09:20 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:04.645 21:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.645 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:16:04.645 Malloc0 00:16:04.645 21:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.645 21:09:20 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:04.645 21:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.645 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:16:04.645 21:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.645 21:09:20 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.645 21:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.645 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:16:04.645 21:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.645 21:09:20 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.645 21:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.645 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:16:04.645 [2024-04-18 21:09:20.490186] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.645 21:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.645 21:09:20 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:04.645 21:09:20 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:04.645 21:09:20 -- nvmf/common.sh@521 -- # config=() 00:16:04.645 21:09:20 -- nvmf/common.sh@521 -- # local subsystem config 00:16:04.645 21:09:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:04.645 21:09:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:04.645 { 00:16:04.645 "params": { 00:16:04.645 "name": "Nvme$subsystem", 00:16:04.645 "trtype": "$TEST_TRANSPORT", 00:16:04.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:04.645 "adrfam": "ipv4", 00:16:04.645 "trsvcid": 
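With the target process up (reactors on the four cores selected by -m 0x78), the trace that follows builds the test subsystem over JSON-RPC and then points the bdevio tool at it. Stripped of the rpc_cmd wrapper, the setup is roughly equivalent to these rpc.py calls (a 64 MiB malloc bdev with 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches over TCP at 10.0.0.2:4420 using the JSON produced by gen_nvmf_target_json, which is printed inline in the trace below.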
"$NVMF_PORT", 00:16:04.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:04.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:04.645 "hdgst": ${hdgst:-false}, 00:16:04.645 "ddgst": ${ddgst:-false} 00:16:04.645 }, 00:16:04.645 "method": "bdev_nvme_attach_controller" 00:16:04.645 } 00:16:04.645 EOF 00:16:04.645 )") 00:16:04.645 21:09:20 -- nvmf/common.sh@543 -- # cat 00:16:04.645 21:09:20 -- nvmf/common.sh@545 -- # jq . 00:16:04.645 21:09:20 -- nvmf/common.sh@546 -- # IFS=, 00:16:04.645 21:09:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:04.645 "params": { 00:16:04.646 "name": "Nvme1", 00:16:04.646 "trtype": "tcp", 00:16:04.646 "traddr": "10.0.0.2", 00:16:04.646 "adrfam": "ipv4", 00:16:04.646 "trsvcid": "4420", 00:16:04.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:04.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:04.646 "hdgst": false, 00:16:04.646 "ddgst": false 00:16:04.646 }, 00:16:04.646 "method": "bdev_nvme_attach_controller" 00:16:04.646 }' 00:16:04.646 [2024-04-18 21:09:20.536613] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:16:04.646 [2024-04-18 21:09:20.536658] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044503 ] 00:16:04.646 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.904 [2024-04-18 21:09:20.598102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.904 [2024-04-18 21:09:20.672787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.904 [2024-04-18 21:09:20.672884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.904 [2024-04-18 21:09:20.672885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.163 I/O targets: 00:16:05.163 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:05.163 00:16:05.163 00:16:05.163 CUnit - A unit testing framework for C - Version 2.1-3 00:16:05.163 http://cunit.sourceforge.net/ 00:16:05.163 00:16:05.163 00:16:05.163 Suite: bdevio tests on: Nvme1n1 00:16:05.163 Test: blockdev write read block ...passed 00:16:05.163 Test: blockdev write zeroes read block ...passed 00:16:05.163 Test: blockdev write zeroes read no split ...passed 00:16:05.163 Test: blockdev write zeroes read split ...passed 00:16:05.163 Test: blockdev write zeroes read split partial ...passed 00:16:05.163 Test: blockdev reset ...[2024-04-18 21:09:21.089312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:05.163 [2024-04-18 21:09:21.089379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd44720 (9): Bad file descriptor 00:16:05.421 [2024-04-18 21:09:21.144487] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:05.421 passed 00:16:05.421 Test: blockdev write read 8 blocks ...passed 00:16:05.421 Test: blockdev write read size > 128k ...passed 00:16:05.422 Test: blockdev write read invalid size ...passed 00:16:05.422 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.422 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.422 Test: blockdev write read max offset ...passed 00:16:05.422 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.422 Test: blockdev writev readv 8 blocks ...passed 00:16:05.681 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.681 Test: blockdev writev readv block ...passed 00:16:05.681 Test: blockdev writev readv size > 128k ...passed 00:16:05.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.681 Test: blockdev comparev and writev ...[2024-04-18 21:09:21.404035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.681 [2024-04-18 21:09:21.404062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.404075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.681 [2024-04-18 21:09:21.404083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.404464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.681 [2024-04-18 21:09:21.404476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.404487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.681 [2024-04-18 21:09:21.404494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.404867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.681 [2024-04-18 21:09:21.404878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.404889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.681 [2024-04-18 21:09:21.404896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.405255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.681 [2024-04-18 21:09:21.405265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.405276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.681 [2024-04-18 21:09:21.405283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:05.681 passed 00:16:05.681 Test: blockdev nvme passthru rw ...passed 00:16:05.681 Test: blockdev nvme passthru vendor specific ...[2024-04-18 21:09:21.488184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.681 [2024-04-18 21:09:21.488199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.488442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.681 [2024-04-18 21:09:21.488451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.488692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.681 [2024-04-18 21:09:21.488702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:05.681 [2024-04-18 21:09:21.488940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.681 [2024-04-18 21:09:21.488949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:05.681 passed 00:16:05.681 Test: blockdev nvme admin passthru ...passed 00:16:05.681 Test: blockdev copy ...passed 00:16:05.681 00:16:05.681 Run Summary: Type Total Ran Passed Failed Inactive 00:16:05.681 suites 1 1 n/a 0 0 00:16:05.681 tests 23 23 23 0 0 00:16:05.681 asserts 152 152 152 0 n/a 00:16:05.681 00:16:05.681 Elapsed time = 1.354 seconds 00:16:05.940 21:09:21 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.940 21:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.940 21:09:21 -- common/autotest_common.sh@10 -- # set +x 00:16:05.940 21:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.941 21:09:21 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:05.941 21:09:21 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:05.941 21:09:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:05.941 21:09:21 -- nvmf/common.sh@117 -- # sync 00:16:05.941 21:09:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.941 21:09:21 -- nvmf/common.sh@120 -- # set +e 00:16:05.941 21:09:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.941 21:09:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.941 rmmod nvme_tcp 00:16:05.941 rmmod nvme_fabrics 00:16:05.941 rmmod nvme_keyring 00:16:05.941 21:09:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.941 21:09:21 -- nvmf/common.sh@124 -- # set -e 00:16:05.941 21:09:21 -- nvmf/common.sh@125 -- # return 0 00:16:05.941 21:09:21 -- nvmf/common.sh@478 -- # '[' -n 3044467 ']' 00:16:05.941 21:09:21 -- nvmf/common.sh@479 -- # killprocess 3044467 00:16:05.941 21:09:21 -- common/autotest_common.sh@936 -- # '[' -z 3044467 ']' 00:16:05.941 21:09:21 -- common/autotest_common.sh@940 -- # kill -0 3044467 00:16:05.941 21:09:21 -- common/autotest_common.sh@941 -- # uname 00:16:05.941 21:09:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.941 21:09:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3044467 00:16:05.941 21:09:21 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:05.941 21:09:21 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:05.941 21:09:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3044467' 00:16:05.941 killing process with pid 3044467 00:16:05.941 21:09:21 -- common/autotest_common.sh@955 -- # kill 3044467 00:16:05.941 21:09:21 -- common/autotest_common.sh@960 -- # wait 3044467 00:16:06.200 21:09:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:06.200 21:09:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:06.200 21:09:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:06.200 21:09:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.200 21:09:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.200 21:09:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.200 21:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.200 21:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.735 21:09:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.735 00:16:08.735 real 0m10.925s 00:16:08.735 user 0m13.420s 00:16:08.735 sys 0m5.191s 00:16:08.735 21:09:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:08.735 21:09:24 -- common/autotest_common.sh@10 -- # set +x 00:16:08.735 ************************************ 00:16:08.735 END TEST nvmf_bdevio 00:16:08.735 ************************************ 00:16:08.735 21:09:24 -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:08.735 21:09:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:08.735 21:09:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:08.735 21:09:24 -- common/autotest_common.sh@10 -- # set +x 00:16:08.735 ************************************ 00:16:08.735 START TEST nvmf_auth_target 00:16:08.735 ************************************ 00:16:08.735 21:09:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:08.735 * Looking for test storage... 
00:16:08.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.735 21:09:24 -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.735 21:09:24 -- nvmf/common.sh@7 -- # uname -s 00:16:08.735 21:09:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.735 21:09:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.735 21:09:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.735 21:09:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.735 21:09:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.735 21:09:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.735 21:09:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.735 21:09:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.735 21:09:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.735 21:09:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.735 21:09:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.735 21:09:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.735 21:09:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.735 21:09:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.735 21:09:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.735 21:09:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.735 21:09:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.735 21:09:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.735 21:09:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.735 21:09:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.735 21:09:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.735 21:09:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.735 21:09:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.735 21:09:24 -- paths/export.sh@5 -- # export PATH 00:16:08.736 21:09:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.736 21:09:24 -- nvmf/common.sh@47 -- # : 0 00:16:08.736 21:09:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.736 21:09:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.736 21:09:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.736 21:09:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.736 21:09:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.736 21:09:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.736 21:09:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.736 21:09:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.736 21:09:24 -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:08.736 21:09:24 -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:08.736 21:09:24 -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:08.736 21:09:24 -- target/auth.sh@16 -- # hostnqn=nqn.2024-03.io.spdk:host0 00:16:08.736 21:09:24 -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:08.736 21:09:24 -- target/auth.sh@18 -- # keys=() 00:16:08.736 21:09:24 -- target/auth.sh@53 -- # nvmftestinit 00:16:08.736 21:09:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:08.736 21:09:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.736 21:09:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:08.736 21:09:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:08.736 21:09:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:08.736 21:09:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.736 21:09:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.736 21:09:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.736 21:09:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:08.736 21:09:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:08.736 21:09:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.736 21:09:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.304 21:09:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:15.304 21:09:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:15.304 21:09:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:15.304 21:09:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:15.304 
21:09:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:15.304 21:09:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:15.304 21:09:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:15.304 21:09:30 -- nvmf/common.sh@295 -- # net_devs=() 00:16:15.304 21:09:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:15.304 21:09:30 -- nvmf/common.sh@296 -- # e810=() 00:16:15.304 21:09:30 -- nvmf/common.sh@296 -- # local -ga e810 00:16:15.304 21:09:30 -- nvmf/common.sh@297 -- # x722=() 00:16:15.304 21:09:30 -- nvmf/common.sh@297 -- # local -ga x722 00:16:15.304 21:09:30 -- nvmf/common.sh@298 -- # mlx=() 00:16:15.304 21:09:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:15.304 21:09:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.304 21:09:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:15.304 21:09:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:15.304 21:09:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:15.304 21:09:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.304 21:09:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:15.304 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:15.304 21:09:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.304 21:09:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:15.304 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:15.304 21:09:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:15.304 21:09:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:16:15.304 21:09:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.304 21:09:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:15.304 21:09:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.304 21:09:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:15.304 Found net devices under 0000:86:00.0: cvl_0_0 00:16:15.304 21:09:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.304 21:09:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.304 21:09:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.304 21:09:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:15.304 21:09:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.304 21:09:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:15.304 Found net devices under 0000:86:00.1: cvl_0_1 00:16:15.304 21:09:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.304 21:09:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:15.304 21:09:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:15.304 21:09:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:15.304 21:09:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.304 21:09:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.304 21:09:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.304 21:09:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:15.304 21:09:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.304 21:09:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.304 21:09:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:15.304 21:09:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.304 21:09:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.304 21:09:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:15.304 21:09:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:15.304 21:09:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.304 21:09:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.304 21:09:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.304 21:09:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.304 21:09:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:15.304 21:09:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.304 21:09:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.304 21:09:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.304 21:09:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:15.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:16:15.304 00:16:15.304 --- 10.0.0.2 ping statistics --- 00:16:15.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.304 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:16:15.304 21:09:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:16:15.304 00:16:15.304 --- 10.0.0.1 ping statistics --- 00:16:15.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.304 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:16:15.304 21:09:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.304 21:09:30 -- nvmf/common.sh@411 -- # return 0 00:16:15.304 21:09:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:15.304 21:09:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.304 21:09:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:15.304 21:09:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.304 21:09:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:15.304 21:09:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:15.304 21:09:30 -- target/auth.sh@54 -- # nvmfappstart -L nvmf_auth 00:16:15.304 21:09:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:15.304 21:09:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:15.304 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:16:15.304 21:09:30 -- nvmf/common.sh@470 -- # nvmfpid=3048765 00:16:15.304 21:09:30 -- nvmf/common.sh@471 -- # waitforlisten 3048765 00:16:15.304 21:09:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:15.304 21:09:30 -- common/autotest_common.sh@817 -- # '[' -z 3048765 ']' 00:16:15.304 21:09:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.304 21:09:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.304 21:09:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
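The spdk_tgt started next on /var/tmp/host.sock plays the host role, and the auth test pairs it with four generated DH-HMAC-CHAP secrets, one per digest it exercises: 48 hex characters of key material (24 random bytes read from /dev/urandom with xxd) for the null digest, 32 hex characters for sha256, 48 for sha384 and 64 for sha512. Each key is written to a mktemp file under /tmp (spdk.key-null.pZ1 and so on) and chmod'ed to 0600. The digest index handed to format_dhchap_key (0 through 3) selects the field after the DHHC-1 prefix, so the key files hold secrets of the general shape sketched below; the two-digit digest field and the payload encoding (base64 of the key plus a checksum, produced by the inline "python -" step) are not printed verbatim in the log, so treat the exact framing as an assumption:

  DHHC-1:00:<encoded payload>:   # null digest -> keys[0], /tmp/spdk.key-null.pZ1
  DHHC-1:01:<encoded payload>:   # sha256      -> keys[1], /tmp/spdk.key-sha256.4en
  DHHC-1:02:<encoded payload>:   # sha384      -> keys[2], /tmp/spdk.key-sha384.HBs
  DHHC-1:03:<encoded payload>:   # sha512      -> keys[3], /tmp/spdk.key-sha512.oz4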
00:16:15.304 21:09:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.304 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:16:15.871 21:09:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:15.871 21:09:31 -- common/autotest_common.sh@850 -- # return 0 00:16:15.871 21:09:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:15.871 21:09:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:15.871 21:09:31 -- common/autotest_common.sh@10 -- # set +x 00:16:15.871 21:09:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.871 21:09:31 -- target/auth.sh@56 -- # hostpid=3048801 00:16:15.871 21:09:31 -- target/auth.sh@58 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:15.871 21:09:31 -- target/auth.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:15.871 21:09:31 -- target/auth.sh@60 -- # gen_dhchap_key null 48 00:16:15.871 21:09:31 -- nvmf/common.sh@712 -- # local digest len file key 00:16:15.871 21:09:31 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.871 21:09:31 -- nvmf/common.sh@713 -- # local -A digests 00:16:15.871 21:09:31 -- nvmf/common.sh@715 -- # digest=null 00:16:15.871 21:09:31 -- nvmf/common.sh@715 -- # len=48 00:16:15.871 21:09:31 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:15.871 21:09:31 -- nvmf/common.sh@716 -- # key=7b919b21d3af1cf5aadcf293d55a39b479c4122a5fb2f501 00:16:15.871 21:09:31 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-null.XXX 00:16:15.871 21:09:31 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-null.pZ1 00:16:15.871 21:09:31 -- nvmf/common.sh@718 -- # format_dhchap_key 7b919b21d3af1cf5aadcf293d55a39b479c4122a5fb2f501 0 00:16:15.871 21:09:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 7b919b21d3af1cf5aadcf293d55a39b479c4122a5fb2f501 0 00:16:15.871 21:09:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:15.871 21:09:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:15.871 21:09:31 -- nvmf/common.sh@693 -- # key=7b919b21d3af1cf5aadcf293d55a39b479c4122a5fb2f501 00:16:15.871 21:09:31 -- nvmf/common.sh@693 -- # digest=0 00:16:15.871 21:09:31 -- nvmf/common.sh@694 -- # python - 00:16:15.871 21:09:31 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-null.pZ1 00:16:15.872 21:09:31 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-null.pZ1 00:16:15.872 21:09:31 -- target/auth.sh@60 -- # keys[0]=/tmp/spdk.key-null.pZ1 00:16:15.872 21:09:31 -- target/auth.sh@61 -- # gen_dhchap_key sha256 32 00:16:15.872 21:09:31 -- nvmf/common.sh@712 -- # local digest len file key 00:16:15.872 21:09:31 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.872 21:09:31 -- nvmf/common.sh@713 -- # local -A digests 00:16:15.872 21:09:31 -- nvmf/common.sh@715 -- # digest=sha256 00:16:15.872 21:09:31 -- nvmf/common.sh@715 -- # len=32 00:16:15.872 21:09:31 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:15.872 21:09:31 -- nvmf/common.sh@716 -- # key=e1e97e51b551ed8b1befbf737d5bed5b 00:16:15.872 21:09:31 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-sha256.XXX 00:16:15.872 21:09:31 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha256.4en 00:16:15.872 21:09:31 -- nvmf/common.sh@718 -- # format_dhchap_key e1e97e51b551ed8b1befbf737d5bed5b 1 00:16:15.872 21:09:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 e1e97e51b551ed8b1befbf737d5bed5b 1 00:16:15.872 21:09:31 -- 
nvmf/common.sh@691 -- # local prefix key digest 00:16:15.872 21:09:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:15.872 21:09:31 -- nvmf/common.sh@693 -- # key=e1e97e51b551ed8b1befbf737d5bed5b 00:16:15.872 21:09:31 -- nvmf/common.sh@693 -- # digest=1 00:16:15.872 21:09:31 -- nvmf/common.sh@694 -- # python - 00:16:15.872 21:09:31 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha256.4en 00:16:15.872 21:09:31 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha256.4en 00:16:15.872 21:09:31 -- target/auth.sh@61 -- # keys[1]=/tmp/spdk.key-sha256.4en 00:16:15.872 21:09:31 -- target/auth.sh@62 -- # gen_dhchap_key sha384 48 00:16:15.872 21:09:31 -- nvmf/common.sh@712 -- # local digest len file key 00:16:15.872 21:09:31 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.872 21:09:31 -- nvmf/common.sh@713 -- # local -A digests 00:16:15.872 21:09:31 -- nvmf/common.sh@715 -- # digest=sha384 00:16:15.872 21:09:31 -- nvmf/common.sh@715 -- # len=48 00:16:15.872 21:09:31 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:15.872 21:09:31 -- nvmf/common.sh@716 -- # key=a477222c888c77fb42ae35b60bdfc06b990a4d2a35661455 00:16:15.872 21:09:31 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-sha384.XXX 00:16:15.872 21:09:31 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha384.HBs 00:16:15.872 21:09:31 -- nvmf/common.sh@718 -- # format_dhchap_key a477222c888c77fb42ae35b60bdfc06b990a4d2a35661455 2 00:16:15.872 21:09:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 a477222c888c77fb42ae35b60bdfc06b990a4d2a35661455 2 00:16:15.872 21:09:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:15.872 21:09:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:15.872 21:09:31 -- nvmf/common.sh@693 -- # key=a477222c888c77fb42ae35b60bdfc06b990a4d2a35661455 00:16:15.872 21:09:31 -- nvmf/common.sh@693 -- # digest=2 00:16:15.872 21:09:31 -- nvmf/common.sh@694 -- # python - 00:16:15.872 21:09:31 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha384.HBs 00:16:15.872 21:09:31 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha384.HBs 00:16:15.872 21:09:31 -- target/auth.sh@62 -- # keys[2]=/tmp/spdk.key-sha384.HBs 00:16:15.872 21:09:31 -- target/auth.sh@63 -- # gen_dhchap_key sha512 64 00:16:15.872 21:09:31 -- nvmf/common.sh@712 -- # local digest len file key 00:16:15.872 21:09:31 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.872 21:09:31 -- nvmf/common.sh@713 -- # local -A digests 00:16:15.872 21:09:31 -- nvmf/common.sh@715 -- # digest=sha512 00:16:15.872 21:09:31 -- nvmf/common.sh@715 -- # len=64 00:16:15.872 21:09:31 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:15.872 21:09:31 -- nvmf/common.sh@716 -- # key=381bde14435df5952cd65202fc0958fbbf836f196c6f2270a445966c98ea1a32 00:16:15.872 21:09:31 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-sha512.XXX 00:16:15.872 21:09:31 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha512.oz4 00:16:15.872 21:09:31 -- nvmf/common.sh@718 -- # format_dhchap_key 381bde14435df5952cd65202fc0958fbbf836f196c6f2270a445966c98ea1a32 3 00:16:15.872 21:09:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 381bde14435df5952cd65202fc0958fbbf836f196c6f2270a445966c98ea1a32 3 00:16:15.872 21:09:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:15.872 21:09:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:15.872 21:09:31 -- nvmf/common.sh@693 -- # key=381bde14435df5952cd65202fc0958fbbf836f196c6f2270a445966c98ea1a32 00:16:15.872 21:09:31 -- nvmf/common.sh@693 
-- # digest=3 00:16:15.872 21:09:31 -- nvmf/common.sh@694 -- # python - 00:16:15.872 21:09:31 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha512.oz4 00:16:15.872 21:09:31 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha512.oz4 00:16:15.872 21:09:31 -- target/auth.sh@63 -- # keys[3]=/tmp/spdk.key-sha512.oz4 00:16:15.872 21:09:31 -- target/auth.sh@65 -- # waitforlisten 3048765 00:16:15.872 21:09:31 -- common/autotest_common.sh@817 -- # '[' -z 3048765 ']' 00:16:15.872 21:09:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.872 21:09:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.872 21:09:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.872 21:09:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.872 21:09:31 -- common/autotest_common.sh@10 -- # set +x 00:16:16.130 21:09:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:16.130 21:09:31 -- common/autotest_common.sh@850 -- # return 0 00:16:16.130 21:09:31 -- target/auth.sh@66 -- # waitforlisten 3048801 /var/tmp/host.sock 00:16:16.130 21:09:31 -- common/autotest_common.sh@817 -- # '[' -z 3048801 ']' 00:16:16.130 21:09:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/host.sock 00:16:16.130 21:09:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:16.130 21:09:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:16.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:16.130 21:09:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:16.130 21:09:31 -- common/autotest_common.sh@10 -- # set +x 00:16:16.388 21:09:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:16.388 21:09:32 -- common/autotest_common.sh@850 -- # return 0 00:16:16.388 21:09:32 -- target/auth.sh@67 -- # rpc_cmd 00:16:16.388 21:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.388 21:09:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.388 21:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.388 21:09:32 -- target/auth.sh@74 -- # for i in "${!keys[@]}" 00:16:16.388 21:09:32 -- target/auth.sh@75 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pZ1 00:16:16.388 21:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.388 21:09:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.388 21:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.388 21:09:32 -- target/auth.sh@76 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pZ1 00:16:16.388 21:09:32 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pZ1 00:16:16.646 21:09:32 -- target/auth.sh@74 -- # for i in "${!keys[@]}" 00:16:16.646 21:09:32 -- target/auth.sh@75 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4en 00:16:16.646 21:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.646 21:09:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.646 21:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.646 21:09:32 -- target/auth.sh@76 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4en 00:16:16.646 21:09:32 -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4en 00:16:16.646 21:09:32 -- target/auth.sh@74 -- # for i in "${!keys[@]}" 00:16:16.646 21:09:32 -- target/auth.sh@75 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HBs 00:16:16.646 21:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.646 21:09:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.646 21:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.646 21:09:32 -- target/auth.sh@76 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.HBs 00:16:16.646 21:09:32 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.HBs 00:16:16.904 21:09:32 -- target/auth.sh@74 -- # for i in "${!keys[@]}" 00:16:16.904 21:09:32 -- target/auth.sh@75 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oz4 00:16:16.904 21:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.904 21:09:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.904 21:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.904 21:09:32 -- target/auth.sh@76 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oz4 00:16:16.904 21:09:32 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oz4 00:16:17.161 21:09:32 -- target/auth.sh@80 -- # for digest in "${digests[@]}" 00:16:17.161 21:09:32 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.161 21:09:32 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:17.161 21:09:32 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.161 21:09:32 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.419 21:09:33 -- target/auth.sh@85 -- # connect_authenticate sha256 null 0 00:16:17.419 21:09:33 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:17.419 21:09:33 -- target/auth.sh@36 -- # digest=sha256 00:16:17.419 21:09:33 -- target/auth.sh@36 -- # dhgroup=null 00:16:17.419 21:09:33 -- target/auth.sh@36 -- # key=key0 00:16:17.419 21:09:33 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:17.419 21:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.419 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.419 21:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.419 21:09:33 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:17.419 21:09:33 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:17.419 00:16:17.419 21:09:33 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:17.419 21:09:33 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:17.419 21:09:33 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.677 21:09:33 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.677 21:09:33 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.677 21:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.677 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.677 21:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.677 21:09:33 -- target/auth.sh@44 -- # qpairs='[ 00:16:17.677 { 00:16:17.677 "cntlid": 1, 00:16:17.677 "qid": 0, 00:16:17.677 "state": "enabled", 00:16:17.677 "listen_address": { 00:16:17.677 "trtype": "TCP", 00:16:17.677 "adrfam": "IPv4", 00:16:17.677 "traddr": "10.0.0.2", 00:16:17.677 "trsvcid": "4420" 00:16:17.677 }, 00:16:17.677 "peer_address": { 00:16:17.677 "trtype": "TCP", 00:16:17.677 "adrfam": "IPv4", 00:16:17.677 "traddr": "10.0.0.1", 00:16:17.677 "trsvcid": "45054" 00:16:17.677 }, 00:16:17.677 "auth": { 00:16:17.677 "state": "completed", 00:16:17.677 "digest": "sha256", 00:16:17.677 "dhgroup": "null" 00:16:17.677 } 00:16:17.677 } 00:16:17.677 ]' 00:16:17.677 21:09:33 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:17.677 21:09:33 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.677 21:09:33 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:17.935 21:09:33 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:17.935 21:09:33 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:17.935 21:09:33 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.935 21:09:33 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.935 21:09:33 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.935 21:09:33 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:17.935 21:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.935 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.935 21:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.935 21:09:33 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:17.935 21:09:33 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.935 21:09:33 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.192 21:09:34 -- target/auth.sh@85 -- # connect_authenticate sha256 null 1 00:16:18.192 21:09:34 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:18.192 21:09:34 -- target/auth.sh@36 -- # digest=sha256 00:16:18.192 21:09:34 -- target/auth.sh@36 -- # dhgroup=null 00:16:18.192 21:09:34 -- target/auth.sh@36 -- # key=key1 00:16:18.192 21:09:34 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:18.192 21:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.192 21:09:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.192 21:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.192 21:09:34 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:18.192 21:09:34 -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:18.450 00:16:18.450 21:09:34 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:18.450 21:09:34 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:18.450 21:09:34 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.708 21:09:34 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.708 21:09:34 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.708 21:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.708 21:09:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.708 21:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.708 21:09:34 -- target/auth.sh@44 -- # qpairs='[ 00:16:18.708 { 00:16:18.708 "cntlid": 2, 00:16:18.708 "qid": 0, 00:16:18.708 "state": "enabled", 00:16:18.708 "listen_address": { 00:16:18.708 "trtype": "TCP", 00:16:18.708 "adrfam": "IPv4", 00:16:18.708 "traddr": "10.0.0.2", 00:16:18.708 "trsvcid": "4420" 00:16:18.708 }, 00:16:18.708 "peer_address": { 00:16:18.708 "trtype": "TCP", 00:16:18.708 "adrfam": "IPv4", 00:16:18.708 "traddr": "10.0.0.1", 00:16:18.708 "trsvcid": "45062" 00:16:18.708 }, 00:16:18.708 "auth": { 00:16:18.708 "state": "completed", 00:16:18.708 "digest": "sha256", 00:16:18.708 "dhgroup": "null" 00:16:18.708 } 00:16:18.708 } 00:16:18.708 ]' 00:16:18.708 21:09:34 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:18.708 21:09:34 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.708 21:09:34 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:18.708 21:09:34 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:18.708 21:09:34 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:18.708 21:09:34 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.708 21:09:34 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.708 21:09:34 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.967 21:09:34 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:18.967 21:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.967 21:09:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 21:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.967 21:09:34 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:18.967 21:09:34 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.967 21:09:34 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.260 21:09:34 -- target/auth.sh@85 -- # connect_authenticate sha256 null 2 00:16:19.260 21:09:34 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:19.260 21:09:34 -- target/auth.sh@36 -- # digest=sha256 00:16:19.260 21:09:34 -- target/auth.sh@36 -- # dhgroup=null 00:16:19.260 21:09:34 -- target/auth.sh@36 -- # key=key2 00:16:19.260 21:09:34 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:19.260 21:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.260 21:09:34 -- common/autotest_common.sh@10 -- # set +x 00:16:19.260 21:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.260 21:09:34 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:19.260 21:09:34 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:19.260 00:16:19.518 21:09:35 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:19.519 21:09:35 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.519 21:09:35 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:19.519 21:09:35 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.519 21:09:35 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.519 21:09:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.519 21:09:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.519 21:09:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.519 21:09:35 -- target/auth.sh@44 -- # qpairs='[ 00:16:19.519 { 00:16:19.519 "cntlid": 3, 00:16:19.519 "qid": 0, 00:16:19.519 "state": "enabled", 00:16:19.519 "listen_address": { 00:16:19.519 "trtype": "TCP", 00:16:19.519 "adrfam": "IPv4", 00:16:19.519 "traddr": "10.0.0.2", 00:16:19.519 "trsvcid": "4420" 00:16:19.519 }, 00:16:19.519 "peer_address": { 00:16:19.519 "trtype": "TCP", 00:16:19.519 "adrfam": "IPv4", 00:16:19.519 "traddr": "10.0.0.1", 00:16:19.519 "trsvcid": "45068" 00:16:19.519 }, 00:16:19.519 "auth": { 00:16:19.519 "state": "completed", 00:16:19.519 "digest": "sha256", 00:16:19.519 "dhgroup": "null" 00:16:19.519 } 00:16:19.519 } 00:16:19.519 ]' 00:16:19.519 21:09:35 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:19.519 21:09:35 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.519 21:09:35 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:19.776 21:09:35 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:19.776 21:09:35 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:19.776 21:09:35 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.776 21:09:35 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.776 21:09:35 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.776 21:09:35 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:19.776 21:09:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.776 21:09:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.776 21:09:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.776 21:09:35 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:19.776 21:09:35 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.776 21:09:35 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.033 21:09:35 -- target/auth.sh@85 -- # connect_authenticate sha256 null 3 00:16:20.033 21:09:35 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:20.033 21:09:35 -- target/auth.sh@36 -- # digest=sha256 00:16:20.033 21:09:35 -- target/auth.sh@36 -- # dhgroup=null 00:16:20.034 21:09:35 -- target/auth.sh@36 -- # key=key3 00:16:20.034 21:09:35 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:20.034 21:09:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.034 21:09:35 -- common/autotest_common.sh@10 -- # set +x 00:16:20.034 21:09:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.034 21:09:35 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.034 21:09:35 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.292 00:16:20.292 21:09:36 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:20.292 21:09:36 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:20.292 21:09:36 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.549 21:09:36 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.549 21:09:36 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.549 21:09:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.549 21:09:36 -- common/autotest_common.sh@10 -- # set +x 00:16:20.549 21:09:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.549 21:09:36 -- target/auth.sh@44 -- # qpairs='[ 00:16:20.549 { 00:16:20.549 "cntlid": 4, 00:16:20.549 "qid": 0, 00:16:20.549 "state": "enabled", 00:16:20.549 "listen_address": { 00:16:20.549 "trtype": "TCP", 00:16:20.549 "adrfam": "IPv4", 00:16:20.549 "traddr": "10.0.0.2", 00:16:20.549 "trsvcid": "4420" 00:16:20.549 }, 00:16:20.549 "peer_address": { 00:16:20.549 "trtype": "TCP", 00:16:20.549 "adrfam": "IPv4", 00:16:20.549 "traddr": "10.0.0.1", 00:16:20.549 "trsvcid": "48714" 00:16:20.549 }, 00:16:20.549 "auth": { 00:16:20.549 "state": "completed", 00:16:20.549 "digest": "sha256", 00:16:20.549 "dhgroup": "null" 00:16:20.549 } 00:16:20.549 } 00:16:20.549 ]' 00:16:20.549 21:09:36 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:20.549 21:09:36 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.549 21:09:36 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:20.549 21:09:36 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:20.549 21:09:36 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:20.549 21:09:36 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.549 21:09:36 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.549 21:09:36 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.807 21:09:36 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:20.807 21:09:36 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.807 21:09:36 -- common/autotest_common.sh@10 -- # set +x 00:16:20.807 21:09:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.807 21:09:36 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.807 21:09:36 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:20.807 21:09:36 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.807 21:09:36 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.065 21:09:36 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe2048 0 00:16:21.065 21:09:36 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:21.065 21:09:36 -- target/auth.sh@36 -- # digest=sha256 00:16:21.065 21:09:36 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:21.065 21:09:36 -- target/auth.sh@36 -- # key=key0 00:16:21.065 21:09:36 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:21.065 21:09:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.065 21:09:36 -- common/autotest_common.sh@10 -- # set +x 00:16:21.065 21:09:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.065 21:09:36 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:21.065 21:09:36 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:21.324 00:16:21.324 21:09:37 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:21.324 21:09:37 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:21.324 21:09:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.324 21:09:37 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.324 21:09:37 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.324 21:09:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.324 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:16:21.324 21:09:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.324 21:09:37 -- target/auth.sh@44 -- # qpairs='[ 00:16:21.324 { 00:16:21.324 "cntlid": 5, 00:16:21.324 "qid": 0, 00:16:21.324 "state": "enabled", 00:16:21.324 "listen_address": { 00:16:21.324 "trtype": "TCP", 00:16:21.324 "adrfam": "IPv4", 00:16:21.324 "traddr": "10.0.0.2", 00:16:21.324 "trsvcid": "4420" 00:16:21.324 }, 00:16:21.324 "peer_address": { 00:16:21.324 "trtype": "TCP", 00:16:21.324 "adrfam": "IPv4", 00:16:21.324 "traddr": "10.0.0.1", 00:16:21.324 "trsvcid": "48726" 00:16:21.324 }, 00:16:21.324 "auth": { 00:16:21.324 "state": "completed", 00:16:21.324 "digest": "sha256", 00:16:21.324 "dhgroup": "ffdhe2048" 00:16:21.324 } 00:16:21.324 } 00:16:21.324 ]' 00:16:21.324 21:09:37 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:21.581 21:09:37 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.581 21:09:37 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:21.581 21:09:37 -- target/auth.sh@46 
-- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.581 21:09:37 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:21.581 21:09:37 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.581 21:09:37 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.581 21:09:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.839 21:09:37 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:21.839 21:09:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.839 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:16:21.839 21:09:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.839 21:09:37 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:21.839 21:09:37 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.839 21:09:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.839 21:09:37 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe2048 1 00:16:21.839 21:09:37 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:21.839 21:09:37 -- target/auth.sh@36 -- # digest=sha256 00:16:21.839 21:09:37 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:21.839 21:09:37 -- target/auth.sh@36 -- # key=key1 00:16:21.839 21:09:37 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:21.839 21:09:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.839 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:16:21.839 21:09:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.839 21:09:37 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:21.839 21:09:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:22.097 00:16:22.097 21:09:37 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:22.097 21:09:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.097 21:09:37 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:22.355 21:09:38 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.355 21:09:38 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.355 21:09:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.355 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:16:22.355 21:09:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.355 21:09:38 -- target/auth.sh@44 -- # qpairs='[ 00:16:22.355 { 00:16:22.355 "cntlid": 6, 00:16:22.355 "qid": 0, 00:16:22.355 "state": "enabled", 00:16:22.355 "listen_address": { 00:16:22.355 "trtype": "TCP", 00:16:22.355 "adrfam": "IPv4", 00:16:22.355 "traddr": "10.0.0.2", 00:16:22.355 "trsvcid": "4420" 00:16:22.355 }, 00:16:22.355 "peer_address": { 00:16:22.355 "trtype": "TCP", 00:16:22.355 
"adrfam": "IPv4", 00:16:22.355 "traddr": "10.0.0.1", 00:16:22.355 "trsvcid": "48738" 00:16:22.355 }, 00:16:22.355 "auth": { 00:16:22.355 "state": "completed", 00:16:22.355 "digest": "sha256", 00:16:22.355 "dhgroup": "ffdhe2048" 00:16:22.355 } 00:16:22.355 } 00:16:22.355 ]' 00:16:22.355 21:09:38 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:22.355 21:09:38 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.355 21:09:38 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:22.355 21:09:38 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.355 21:09:38 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:22.355 21:09:38 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.355 21:09:38 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.355 21:09:38 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.613 21:09:38 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:22.613 21:09:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.613 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:16:22.613 21:09:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.613 21:09:38 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:22.613 21:09:38 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.613 21:09:38 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.870 21:09:38 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe2048 2 00:16:22.870 21:09:38 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:22.870 21:09:38 -- target/auth.sh@36 -- # digest=sha256 00:16:22.870 21:09:38 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:22.870 21:09:38 -- target/auth.sh@36 -- # key=key2 00:16:22.870 21:09:38 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:22.870 21:09:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.870 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:16:22.870 21:09:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.870 21:09:38 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:22.871 21:09:38 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:23.128 00:16:23.128 21:09:38 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:23.128 21:09:38 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:23.128 21:09:38 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.128 21:09:39 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.128 21:09:39 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.128 21:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:16:23.128 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:16:23.386 21:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.386 21:09:39 -- target/auth.sh@44 -- # qpairs='[ 00:16:23.386 { 00:16:23.386 "cntlid": 7, 00:16:23.386 "qid": 0, 00:16:23.386 "state": "enabled", 00:16:23.386 "listen_address": { 00:16:23.386 "trtype": "TCP", 00:16:23.386 "adrfam": "IPv4", 00:16:23.386 "traddr": "10.0.0.2", 00:16:23.386 "trsvcid": "4420" 00:16:23.386 }, 00:16:23.386 "peer_address": { 00:16:23.386 "trtype": "TCP", 00:16:23.386 "adrfam": "IPv4", 00:16:23.386 "traddr": "10.0.0.1", 00:16:23.386 "trsvcid": "48746" 00:16:23.386 }, 00:16:23.386 "auth": { 00:16:23.386 "state": "completed", 00:16:23.386 "digest": "sha256", 00:16:23.386 "dhgroup": "ffdhe2048" 00:16:23.386 } 00:16:23.386 } 00:16:23.386 ]' 00:16:23.386 21:09:39 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:23.386 21:09:39 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.386 21:09:39 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:23.386 21:09:39 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.386 21:09:39 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:23.386 21:09:39 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.386 21:09:39 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.386 21:09:39 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.644 21:09:39 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:23.644 21:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.644 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:16:23.644 21:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.644 21:09:39 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:23.644 21:09:39 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.644 21:09:39 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.644 21:09:39 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe2048 3 00:16:23.644 21:09:39 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:23.644 21:09:39 -- target/auth.sh@36 -- # digest=sha256 00:16:23.644 21:09:39 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:23.644 21:09:39 -- target/auth.sh@36 -- # key=key3 00:16:23.644 21:09:39 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:23.644 21:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.644 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:16:23.644 21:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.644 21:09:39 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.644 21:09:39 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.902 00:16:23.902 21:09:39 -- 
target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:23.902 21:09:39 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:23.902 21:09:39 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.160 21:09:39 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.160 21:09:39 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.160 21:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.160 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:16:24.160 21:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.160 21:09:40 -- target/auth.sh@44 -- # qpairs='[ 00:16:24.160 { 00:16:24.160 "cntlid": 8, 00:16:24.160 "qid": 0, 00:16:24.160 "state": "enabled", 00:16:24.160 "listen_address": { 00:16:24.160 "trtype": "TCP", 00:16:24.160 "adrfam": "IPv4", 00:16:24.160 "traddr": "10.0.0.2", 00:16:24.160 "trsvcid": "4420" 00:16:24.160 }, 00:16:24.160 "peer_address": { 00:16:24.160 "trtype": "TCP", 00:16:24.160 "adrfam": "IPv4", 00:16:24.160 "traddr": "10.0.0.1", 00:16:24.160 "trsvcid": "48760" 00:16:24.160 }, 00:16:24.160 "auth": { 00:16:24.160 "state": "completed", 00:16:24.160 "digest": "sha256", 00:16:24.160 "dhgroup": "ffdhe2048" 00:16:24.160 } 00:16:24.160 } 00:16:24.160 ]' 00:16:24.160 21:09:40 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:24.160 21:09:40 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.160 21:09:40 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:24.417 21:09:40 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.417 21:09:40 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:24.417 21:09:40 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.417 21:09:40 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.417 21:09:40 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.417 21:09:40 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:24.417 21:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.417 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:16:24.417 21:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.417 21:09:40 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.417 21:09:40 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:24.417 21:09:40 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.417 21:09:40 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.675 21:09:40 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe3072 0 00:16:24.675 21:09:40 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:24.675 21:09:40 -- target/auth.sh@36 -- # digest=sha256 00:16:24.675 21:09:40 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:24.675 21:09:40 -- target/auth.sh@36 -- # key=key0 00:16:24.675 21:09:40 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:24.675 21:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.675 21:09:40 -- 
common/autotest_common.sh@10 -- # set +x 00:16:24.675 21:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.675 21:09:40 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:24.675 21:09:40 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:24.932 00:16:24.932 21:09:40 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:24.932 21:09:40 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:24.932 21:09:40 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.190 21:09:40 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.190 21:09:40 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.190 21:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.190 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:16:25.190 21:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.190 21:09:40 -- target/auth.sh@44 -- # qpairs='[ 00:16:25.190 { 00:16:25.190 "cntlid": 9, 00:16:25.190 "qid": 0, 00:16:25.190 "state": "enabled", 00:16:25.190 "listen_address": { 00:16:25.190 "trtype": "TCP", 00:16:25.190 "adrfam": "IPv4", 00:16:25.190 "traddr": "10.0.0.2", 00:16:25.190 "trsvcid": "4420" 00:16:25.190 }, 00:16:25.190 "peer_address": { 00:16:25.190 "trtype": "TCP", 00:16:25.190 "adrfam": "IPv4", 00:16:25.190 "traddr": "10.0.0.1", 00:16:25.190 "trsvcid": "48772" 00:16:25.190 }, 00:16:25.190 "auth": { 00:16:25.190 "state": "completed", 00:16:25.190 "digest": "sha256", 00:16:25.190 "dhgroup": "ffdhe3072" 00:16:25.190 } 00:16:25.190 } 00:16:25.190 ]' 00:16:25.190 21:09:40 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:25.190 21:09:41 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.190 21:09:41 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:25.190 21:09:41 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.190 21:09:41 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:25.190 21:09:41 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.190 21:09:41 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.190 21:09:41 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.448 21:09:41 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:25.448 21:09:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.448 21:09:41 -- common/autotest_common.sh@10 -- # set +x 00:16:25.448 21:09:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.448 21:09:41 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:25.448 21:09:41 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.448 21:09:41 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.706 21:09:41 -- target/auth.sh@85 -- # 
connect_authenticate sha256 ffdhe3072 1 00:16:25.706 21:09:41 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:25.706 21:09:41 -- target/auth.sh@36 -- # digest=sha256 00:16:25.706 21:09:41 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:25.706 21:09:41 -- target/auth.sh@36 -- # key=key1 00:16:25.706 21:09:41 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:25.706 21:09:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.706 21:09:41 -- common/autotest_common.sh@10 -- # set +x 00:16:25.706 21:09:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.706 21:09:41 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:25.706 21:09:41 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:25.964 00:16:25.964 21:09:41 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:25.964 21:09:41 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:25.964 21:09:41 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.964 21:09:41 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.964 21:09:41 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.964 21:09:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.964 21:09:41 -- common/autotest_common.sh@10 -- # set +x 00:16:26.222 21:09:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.222 21:09:41 -- target/auth.sh@44 -- # qpairs='[ 00:16:26.222 { 00:16:26.222 "cntlid": 10, 00:16:26.222 "qid": 0, 00:16:26.222 "state": "enabled", 00:16:26.222 "listen_address": { 00:16:26.222 "trtype": "TCP", 00:16:26.222 "adrfam": "IPv4", 00:16:26.222 "traddr": "10.0.0.2", 00:16:26.222 "trsvcid": "4420" 00:16:26.222 }, 00:16:26.222 "peer_address": { 00:16:26.222 "trtype": "TCP", 00:16:26.222 "adrfam": "IPv4", 00:16:26.222 "traddr": "10.0.0.1", 00:16:26.222 "trsvcid": "48784" 00:16:26.222 }, 00:16:26.222 "auth": { 00:16:26.222 "state": "completed", 00:16:26.222 "digest": "sha256", 00:16:26.222 "dhgroup": "ffdhe3072" 00:16:26.222 } 00:16:26.222 } 00:16:26.222 ]' 00:16:26.222 21:09:41 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:26.222 21:09:41 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.222 21:09:41 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:26.222 21:09:41 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.222 21:09:41 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:26.222 21:09:42 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.222 21:09:42 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.222 21:09:42 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.481 21:09:42 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:26.481 21:09:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.481 21:09:42 -- common/autotest_common.sh@10 -- # 
set +x 00:16:26.481 21:09:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.481 21:09:42 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:26.481 21:09:42 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.481 21:09:42 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.481 21:09:42 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe3072 2 00:16:26.481 21:09:42 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:26.481 21:09:42 -- target/auth.sh@36 -- # digest=sha256 00:16:26.481 21:09:42 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.481 21:09:42 -- target/auth.sh@36 -- # key=key2 00:16:26.481 21:09:42 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:26.481 21:09:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.481 21:09:42 -- common/autotest_common.sh@10 -- # set +x 00:16:26.738 21:09:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.738 21:09:42 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:26.738 21:09:42 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:26.738 00:16:26.996 21:09:42 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:26.996 21:09:42 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:26.996 21:09:42 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.996 21:09:42 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.996 21:09:42 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.996 21:09:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.996 21:09:42 -- common/autotest_common.sh@10 -- # set +x 00:16:26.996 21:09:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.996 21:09:42 -- target/auth.sh@44 -- # qpairs='[ 00:16:26.996 { 00:16:26.996 "cntlid": 11, 00:16:26.996 "qid": 0, 00:16:26.996 "state": "enabled", 00:16:26.996 "listen_address": { 00:16:26.996 "trtype": "TCP", 00:16:26.996 "adrfam": "IPv4", 00:16:26.996 "traddr": "10.0.0.2", 00:16:26.996 "trsvcid": "4420" 00:16:26.996 }, 00:16:26.996 "peer_address": { 00:16:26.996 "trtype": "TCP", 00:16:26.996 "adrfam": "IPv4", 00:16:26.996 "traddr": "10.0.0.1", 00:16:26.996 "trsvcid": "48788" 00:16:26.996 }, 00:16:26.996 "auth": { 00:16:26.996 "state": "completed", 00:16:26.996 "digest": "sha256", 00:16:26.996 "dhgroup": "ffdhe3072" 00:16:26.996 } 00:16:26.996 } 00:16:26.996 ]' 00:16:26.996 21:09:42 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:26.996 21:09:42 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.996 21:09:42 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:27.254 21:09:42 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.254 21:09:42 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:27.254 21:09:42 -- target/auth.sh@47 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:27.254 21:09:42 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.254 21:09:42 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.254 21:09:43 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:27.254 21:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.254 21:09:43 -- common/autotest_common.sh@10 -- # set +x 00:16:27.511 21:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.511 21:09:43 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:27.511 21:09:43 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.511 21:09:43 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.511 21:09:43 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe3072 3 00:16:27.511 21:09:43 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:27.511 21:09:43 -- target/auth.sh@36 -- # digest=sha256 00:16:27.511 21:09:43 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:27.511 21:09:43 -- target/auth.sh@36 -- # key=key3 00:16:27.511 21:09:43 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:27.511 21:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.511 21:09:43 -- common/autotest_common.sh@10 -- # set +x 00:16:27.511 21:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.511 21:09:43 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.511 21:09:43 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.768 00:16:27.768 21:09:43 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:27.768 21:09:43 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:27.768 21:09:43 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.025 21:09:43 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.025 21:09:43 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.025 21:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.025 21:09:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.025 21:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.025 21:09:43 -- target/auth.sh@44 -- # qpairs='[ 00:16:28.025 { 00:16:28.025 "cntlid": 12, 00:16:28.025 "qid": 0, 00:16:28.025 "state": "enabled", 00:16:28.025 "listen_address": { 00:16:28.025 "trtype": "TCP", 00:16:28.025 "adrfam": "IPv4", 00:16:28.025 "traddr": "10.0.0.2", 00:16:28.025 "trsvcid": "4420" 00:16:28.025 }, 00:16:28.025 "peer_address": { 00:16:28.025 "trtype": "TCP", 00:16:28.026 "adrfam": "IPv4", 00:16:28.026 "traddr": "10.0.0.1", 00:16:28.026 "trsvcid": "48796" 00:16:28.026 }, 00:16:28.026 "auth": { 00:16:28.026 "state": "completed", 00:16:28.026 
"digest": "sha256", 00:16:28.026 "dhgroup": "ffdhe3072" 00:16:28.026 } 00:16:28.026 } 00:16:28.026 ]' 00:16:28.026 21:09:43 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:28.026 21:09:43 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.026 21:09:43 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:28.026 21:09:43 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.026 21:09:43 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:28.026 21:09:43 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.026 21:09:43 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.026 21:09:43 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.283 21:09:44 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:28.283 21:09:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.283 21:09:44 -- common/autotest_common.sh@10 -- # set +x 00:16:28.283 21:09:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.283 21:09:44 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.283 21:09:44 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:28.283 21:09:44 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.283 21:09:44 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.541 21:09:44 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe4096 0 00:16:28.541 21:09:44 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:28.541 21:09:44 -- target/auth.sh@36 -- # digest=sha256 00:16:28.541 21:09:44 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:28.541 21:09:44 -- target/auth.sh@36 -- # key=key0 00:16:28.541 21:09:44 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:28.541 21:09:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.541 21:09:44 -- common/autotest_common.sh@10 -- # set +x 00:16:28.541 21:09:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.541 21:09:44 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:28.541 21:09:44 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:28.797 00:16:28.797 21:09:44 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:28.797 21:09:44 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:28.797 21:09:44 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.055 21:09:44 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.055 21:09:44 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.055 21:09:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.055 21:09:44 -- common/autotest_common.sh@10 -- # set +x 00:16:29.055 21:09:44 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.055 21:09:44 -- target/auth.sh@44 -- # qpairs='[ 00:16:29.055 { 00:16:29.055 "cntlid": 13, 00:16:29.055 "qid": 0, 00:16:29.055 "state": "enabled", 00:16:29.055 "listen_address": { 00:16:29.055 "trtype": "TCP", 00:16:29.055 "adrfam": "IPv4", 00:16:29.055 "traddr": "10.0.0.2", 00:16:29.055 "trsvcid": "4420" 00:16:29.055 }, 00:16:29.055 "peer_address": { 00:16:29.055 "trtype": "TCP", 00:16:29.055 "adrfam": "IPv4", 00:16:29.055 "traddr": "10.0.0.1", 00:16:29.055 "trsvcid": "48802" 00:16:29.055 }, 00:16:29.055 "auth": { 00:16:29.055 "state": "completed", 00:16:29.055 "digest": "sha256", 00:16:29.055 "dhgroup": "ffdhe4096" 00:16:29.055 } 00:16:29.055 } 00:16:29.055 ]' 00:16:29.055 21:09:44 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:29.055 21:09:44 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.055 21:09:44 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:29.055 21:09:44 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.055 21:09:44 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:29.055 21:09:44 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.055 21:09:44 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.055 21:09:44 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.313 21:09:45 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:29.313 21:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.313 21:09:45 -- common/autotest_common.sh@10 -- # set +x 00:16:29.313 21:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.313 21:09:45 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:29.313 21:09:45 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.313 21:09:45 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.570 21:09:45 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe4096 1 00:16:29.570 21:09:45 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:29.570 21:09:45 -- target/auth.sh@36 -- # digest=sha256 00:16:29.570 21:09:45 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.570 21:09:45 -- target/auth.sh@36 -- # key=key1 00:16:29.570 21:09:45 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:29.570 21:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.570 21:09:45 -- common/autotest_common.sh@10 -- # set +x 00:16:29.570 21:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.570 21:09:45 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:29.570 21:09:45 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:29.828 00:16:29.828 21:09:45 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:29.828 21:09:45 -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.828 21:09:45 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:29.828 21:09:45 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.828 21:09:45 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.828 21:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.828 21:09:45 -- common/autotest_common.sh@10 -- # set +x 00:16:29.828 21:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.828 21:09:45 -- target/auth.sh@44 -- # qpairs='[ 00:16:29.828 { 00:16:29.828 "cntlid": 14, 00:16:29.828 "qid": 0, 00:16:29.828 "state": "enabled", 00:16:29.828 "listen_address": { 00:16:29.828 "trtype": "TCP", 00:16:29.828 "adrfam": "IPv4", 00:16:29.828 "traddr": "10.0.0.2", 00:16:29.828 "trsvcid": "4420" 00:16:29.828 }, 00:16:29.828 "peer_address": { 00:16:29.828 "trtype": "TCP", 00:16:29.828 "adrfam": "IPv4", 00:16:29.828 "traddr": "10.0.0.1", 00:16:29.828 "trsvcid": "48810" 00:16:29.828 }, 00:16:29.828 "auth": { 00:16:29.828 "state": "completed", 00:16:29.828 "digest": "sha256", 00:16:29.828 "dhgroup": "ffdhe4096" 00:16:29.828 } 00:16:29.828 } 00:16:29.828 ]' 00:16:29.828 21:09:45 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:30.086 21:09:45 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.086 21:09:45 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:30.086 21:09:45 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.086 21:09:45 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:30.086 21:09:45 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.086 21:09:45 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.086 21:09:45 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.344 21:09:46 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:30.344 21:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.344 21:09:46 -- common/autotest_common.sh@10 -- # set +x 00:16:30.344 21:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.344 21:09:46 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:30.344 21:09:46 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.344 21:09:46 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.344 21:09:46 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe4096 2 00:16:30.344 21:09:46 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:30.344 21:09:46 -- target/auth.sh@36 -- # digest=sha256 00:16:30.344 21:09:46 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:30.344 21:09:46 -- target/auth.sh@36 -- # key=key2 00:16:30.344 21:09:46 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:30.344 21:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.344 21:09:46 -- common/autotest_common.sh@10 -- # set +x 00:16:30.344 21:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.344 21:09:46 -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:30.344 21:09:46 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:30.602 00:16:30.602 21:09:46 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:30.602 21:09:46 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:30.602 21:09:46 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.859 21:09:46 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.859 21:09:46 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.859 21:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.859 21:09:46 -- common/autotest_common.sh@10 -- # set +x 00:16:30.859 21:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.859 21:09:46 -- target/auth.sh@44 -- # qpairs='[ 00:16:30.859 { 00:16:30.859 "cntlid": 15, 00:16:30.859 "qid": 0, 00:16:30.859 "state": "enabled", 00:16:30.859 "listen_address": { 00:16:30.859 "trtype": "TCP", 00:16:30.859 "adrfam": "IPv4", 00:16:30.860 "traddr": "10.0.0.2", 00:16:30.860 "trsvcid": "4420" 00:16:30.860 }, 00:16:30.860 "peer_address": { 00:16:30.860 "trtype": "TCP", 00:16:30.860 "adrfam": "IPv4", 00:16:30.860 "traddr": "10.0.0.1", 00:16:30.860 "trsvcid": "36522" 00:16:30.860 }, 00:16:30.860 "auth": { 00:16:30.860 "state": "completed", 00:16:30.860 "digest": "sha256", 00:16:30.860 "dhgroup": "ffdhe4096" 00:16:30.860 } 00:16:30.860 } 00:16:30.860 ]' 00:16:30.860 21:09:46 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:30.860 21:09:46 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.860 21:09:46 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:30.860 21:09:46 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.860 21:09:46 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:31.118 21:09:46 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.118 21:09:46 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.118 21:09:46 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.118 21:09:46 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:31.118 21:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.118 21:09:46 -- common/autotest_common.sh@10 -- # set +x 00:16:31.118 21:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.118 21:09:46 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:31.118 21:09:46 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.118 21:09:46 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.376 21:09:47 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe4096 3 00:16:31.376 21:09:47 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:31.376 21:09:47 -- target/auth.sh@36 -- # digest=sha256 
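Aside on reading the trace: every entry tagged target/auth.sh@31 is the test's hostrpc helper expanding into a plain rpc.py call against the host-side SPDK application's RPC socket at /var/tmp/host.sock, while rpc_cmd entries drive the target application. The helper's body is never printed in the log; a minimal sketch consistent with the expansions shown, with $rootdir standing in for the SPDK checkout, would be:

  hostrpc() {
      # Forward an RPC to the host-side bdev_nvme stack listening on /var/tmp/host.sock,
      # matching the expansions logged at target/auth.sh@31.
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
  }

  # Example from the trace: drop the test controller after a check.
  hostrpc bdev_nvme_detach_controller nvme0
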
00:16:31.376 21:09:47 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:31.376 21:09:47 -- target/auth.sh@36 -- # key=key3 00:16:31.376 21:09:47 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:31.376 21:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.376 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:16:31.376 21:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.376 21:09:47 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.376 21:09:47 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.633 00:16:31.633 21:09:47 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:31.633 21:09:47 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:31.633 21:09:47 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.892 21:09:47 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.892 21:09:47 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.892 21:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.892 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:16:31.892 21:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.892 21:09:47 -- target/auth.sh@44 -- # qpairs='[ 00:16:31.892 { 00:16:31.892 "cntlid": 16, 00:16:31.892 "qid": 0, 00:16:31.892 "state": "enabled", 00:16:31.892 "listen_address": { 00:16:31.892 "trtype": "TCP", 00:16:31.892 "adrfam": "IPv4", 00:16:31.892 "traddr": "10.0.0.2", 00:16:31.892 "trsvcid": "4420" 00:16:31.892 }, 00:16:31.892 "peer_address": { 00:16:31.892 "trtype": "TCP", 00:16:31.892 "adrfam": "IPv4", 00:16:31.892 "traddr": "10.0.0.1", 00:16:31.892 "trsvcid": "36532" 00:16:31.892 }, 00:16:31.892 "auth": { 00:16:31.892 "state": "completed", 00:16:31.892 "digest": "sha256", 00:16:31.892 "dhgroup": "ffdhe4096" 00:16:31.892 } 00:16:31.892 } 00:16:31.892 ]' 00:16:31.892 21:09:47 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:31.892 21:09:47 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.892 21:09:47 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:31.892 21:09:47 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.892 21:09:47 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:31.892 21:09:47 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.892 21:09:47 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.892 21:09:47 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.150 21:09:47 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:32.150 21:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.150 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:16:32.150 21:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.150 21:09:47 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.150 21:09:47 
-- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:32.150 21:09:47 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.150 21:09:47 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.408 21:09:48 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe6144 0 00:16:32.408 21:09:48 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:32.408 21:09:48 -- target/auth.sh@36 -- # digest=sha256 00:16:32.408 21:09:48 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:32.408 21:09:48 -- target/auth.sh@36 -- # key=key0 00:16:32.408 21:09:48 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:32.408 21:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.408 21:09:48 -- common/autotest_common.sh@10 -- # set +x 00:16:32.408 21:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.408 21:09:48 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:32.409 21:09:48 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:32.707 00:16:32.707 21:09:48 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:32.707 21:09:48 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:32.707 21:09:48 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.002 21:09:48 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.002 21:09:48 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.002 21:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.002 21:09:48 -- common/autotest_common.sh@10 -- # set +x 00:16:33.002 21:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.002 21:09:48 -- target/auth.sh@44 -- # qpairs='[ 00:16:33.002 { 00:16:33.002 "cntlid": 17, 00:16:33.002 "qid": 0, 00:16:33.002 "state": "enabled", 00:16:33.002 "listen_address": { 00:16:33.002 "trtype": "TCP", 00:16:33.002 "adrfam": "IPv4", 00:16:33.002 "traddr": "10.0.0.2", 00:16:33.002 "trsvcid": "4420" 00:16:33.002 }, 00:16:33.002 "peer_address": { 00:16:33.002 "trtype": "TCP", 00:16:33.002 "adrfam": "IPv4", 00:16:33.002 "traddr": "10.0.0.1", 00:16:33.002 "trsvcid": "36540" 00:16:33.002 }, 00:16:33.002 "auth": { 00:16:33.002 "state": "completed", 00:16:33.002 "digest": "sha256", 00:16:33.002 "dhgroup": "ffdhe6144" 00:16:33.002 } 00:16:33.002 } 00:16:33.002 ]' 00:16:33.002 21:09:48 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:33.002 21:09:48 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.002 21:09:48 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:33.002 21:09:48 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.002 21:09:48 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:33.002 21:09:48 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.002 21:09:48 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
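Each connect_authenticate <digest> <dhgroup> <keyid> pass visible above follows the same setup sequence: the host is restricted to one DH-HMAC-CHAP digest and DH group, the host NQN is registered on the subsystem with the selected key, and a controller is attached over TCP. A condensed sketch of one pass, using the same RPCs and arguments that appear in the trace (rpc_cmd is the target-side helper from autotest_common.sh; key names of the form keyN are taken from the log):

  digest=sha256 dhgroup=ffdhe6144 key=key0   # one combination from the loop

  # Host side: offer only this digest/DH group during DH-HMAC-CHAP negotiation.
  hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Target side: allow host0 on cnode0, authenticated with the selected key.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 \
      --dhchap-key "$key"

  # Host side: attach a controller over TCP, which triggers authentication.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key"
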
00:16:33.002 21:09:48 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.260 21:09:48 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:33.260 21:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.260 21:09:48 -- common/autotest_common.sh@10 -- # set +x 00:16:33.260 21:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.260 21:09:49 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:33.260 21:09:49 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.260 21:09:49 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.260 21:09:49 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe6144 1 00:16:33.260 21:09:49 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:33.260 21:09:49 -- target/auth.sh@36 -- # digest=sha256 00:16:33.260 21:09:49 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:33.260 21:09:49 -- target/auth.sh@36 -- # key=key1 00:16:33.260 21:09:49 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:33.260 21:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.260 21:09:49 -- common/autotest_common.sh@10 -- # set +x 00:16:33.260 21:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.260 21:09:49 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:33.260 21:09:49 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:33.826 00:16:33.826 21:09:49 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:33.826 21:09:49 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:33.826 21:09:49 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.826 21:09:49 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.826 21:09:49 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.826 21:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.826 21:09:49 -- common/autotest_common.sh@10 -- # set +x 00:16:33.826 21:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.826 21:09:49 -- target/auth.sh@44 -- # qpairs='[ 00:16:33.826 { 00:16:33.826 "cntlid": 18, 00:16:33.826 "qid": 0, 00:16:33.826 "state": "enabled", 00:16:33.826 "listen_address": { 00:16:33.826 "trtype": "TCP", 00:16:33.826 "adrfam": "IPv4", 00:16:33.826 "traddr": "10.0.0.2", 00:16:33.826 "trsvcid": "4420" 00:16:33.826 }, 00:16:33.826 "peer_address": { 00:16:33.826 "trtype": "TCP", 00:16:33.826 "adrfam": "IPv4", 00:16:33.826 "traddr": "10.0.0.1", 00:16:33.826 "trsvcid": "36544" 00:16:33.826 }, 00:16:33.826 "auth": { 00:16:33.826 "state": "completed", 00:16:33.826 "digest": "sha256", 00:16:33.826 "dhgroup": "ffdhe6144" 00:16:33.826 } 00:16:33.826 } 00:16:33.826 ]' 00:16:33.826 
21:09:49 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:33.826 21:09:49 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.826 21:09:49 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:34.085 21:09:49 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.085 21:09:49 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:34.085 21:09:49 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.085 21:09:49 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.085 21:09:49 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.085 21:09:49 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:34.085 21:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.085 21:09:49 -- common/autotest_common.sh@10 -- # set +x 00:16:34.085 21:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.085 21:09:50 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:34.085 21:09:50 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.085 21:09:50 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.348 21:09:50 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe6144 2 00:16:34.348 21:09:50 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:34.348 21:09:50 -- target/auth.sh@36 -- # digest=sha256 00:16:34.348 21:09:50 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:34.348 21:09:50 -- target/auth.sh@36 -- # key=key2 00:16:34.348 21:09:50 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:34.348 21:09:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.348 21:09:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.348 21:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.348 21:09:50 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:34.348 21:09:50 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:34.605 00:16:34.863 21:09:50 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:34.863 21:09:50 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:34.863 21:09:50 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.863 21:09:50 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.863 21:09:50 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.863 21:09:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.863 21:09:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 21:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.863 21:09:50 -- target/auth.sh@44 -- # qpairs='[ 00:16:34.863 { 00:16:34.863 "cntlid": 19, 00:16:34.863 "qid": 0, 00:16:34.863 "state": 
"enabled", 00:16:34.863 "listen_address": { 00:16:34.863 "trtype": "TCP", 00:16:34.863 "adrfam": "IPv4", 00:16:34.863 "traddr": "10.0.0.2", 00:16:34.863 "trsvcid": "4420" 00:16:34.863 }, 00:16:34.863 "peer_address": { 00:16:34.863 "trtype": "TCP", 00:16:34.863 "adrfam": "IPv4", 00:16:34.863 "traddr": "10.0.0.1", 00:16:34.863 "trsvcid": "36556" 00:16:34.863 }, 00:16:34.863 "auth": { 00:16:34.863 "state": "completed", 00:16:34.863 "digest": "sha256", 00:16:34.863 "dhgroup": "ffdhe6144" 00:16:34.863 } 00:16:34.863 } 00:16:34.863 ]' 00:16:34.863 21:09:50 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:34.863 21:09:50 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.863 21:09:50 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:35.121 21:09:50 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.121 21:09:50 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:35.121 21:09:50 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.121 21:09:50 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.121 21:09:50 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.121 21:09:51 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:35.121 21:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.121 21:09:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.121 21:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.121 21:09:51 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:35.121 21:09:51 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.121 21:09:51 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.378 21:09:51 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe6144 3 00:16:35.378 21:09:51 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:35.378 21:09:51 -- target/auth.sh@36 -- # digest=sha256 00:16:35.378 21:09:51 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.378 21:09:51 -- target/auth.sh@36 -- # key=key3 00:16:35.378 21:09:51 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:35.378 21:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.378 21:09:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.378 21:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.379 21:09:51 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.379 21:09:51 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.636 00:16:35.894 21:09:51 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:35.894 21:09:51 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:35.894 21:09:51 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:16:35.894 21:09:51 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.894 21:09:51 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.894 21:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.894 21:09:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.894 21:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.894 21:09:51 -- target/auth.sh@44 -- # qpairs='[ 00:16:35.894 { 00:16:35.894 "cntlid": 20, 00:16:35.894 "qid": 0, 00:16:35.894 "state": "enabled", 00:16:35.894 "listen_address": { 00:16:35.894 "trtype": "TCP", 00:16:35.894 "adrfam": "IPv4", 00:16:35.894 "traddr": "10.0.0.2", 00:16:35.894 "trsvcid": "4420" 00:16:35.894 }, 00:16:35.894 "peer_address": { 00:16:35.894 "trtype": "TCP", 00:16:35.894 "adrfam": "IPv4", 00:16:35.894 "traddr": "10.0.0.1", 00:16:35.894 "trsvcid": "36562" 00:16:35.894 }, 00:16:35.894 "auth": { 00:16:35.894 "state": "completed", 00:16:35.894 "digest": "sha256", 00:16:35.894 "dhgroup": "ffdhe6144" 00:16:35.894 } 00:16:35.894 } 00:16:35.894 ]' 00:16:35.894 21:09:51 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:35.894 21:09:51 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.894 21:09:51 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:36.151 21:09:51 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.151 21:09:51 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:36.151 21:09:51 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.151 21:09:51 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.151 21:09:51 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.151 21:09:52 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:36.151 21:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.151 21:09:52 -- common/autotest_common.sh@10 -- # set +x 00:16:36.151 21:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.151 21:09:52 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.151 21:09:52 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:36.151 21:09:52 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.151 21:09:52 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.409 21:09:52 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe8192 0 00:16:36.409 21:09:52 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:36.409 21:09:52 -- target/auth.sh@36 -- # digest=sha256 00:16:36.409 21:09:52 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.409 21:09:52 -- target/auth.sh@36 -- # key=key0 00:16:36.409 21:09:52 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:36.409 21:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.409 21:09:52 -- common/autotest_common.sh@10 -- # set +x 00:16:36.409 21:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.409 21:09:52 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 00:16:36.409 21:09:52 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:36.974 00:16:36.974 21:09:52 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:36.974 21:09:52 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:36.974 21:09:52 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.974 21:09:52 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.974 21:09:52 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.974 21:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.974 21:09:52 -- common/autotest_common.sh@10 -- # set +x 00:16:37.232 21:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.232 21:09:52 -- target/auth.sh@44 -- # qpairs='[ 00:16:37.232 { 00:16:37.232 "cntlid": 21, 00:16:37.232 "qid": 0, 00:16:37.232 "state": "enabled", 00:16:37.232 "listen_address": { 00:16:37.232 "trtype": "TCP", 00:16:37.232 "adrfam": "IPv4", 00:16:37.232 "traddr": "10.0.0.2", 00:16:37.232 "trsvcid": "4420" 00:16:37.232 }, 00:16:37.232 "peer_address": { 00:16:37.232 "trtype": "TCP", 00:16:37.232 "adrfam": "IPv4", 00:16:37.232 "traddr": "10.0.0.1", 00:16:37.232 "trsvcid": "36574" 00:16:37.232 }, 00:16:37.232 "auth": { 00:16:37.232 "state": "completed", 00:16:37.232 "digest": "sha256", 00:16:37.232 "dhgroup": "ffdhe8192" 00:16:37.232 } 00:16:37.232 } 00:16:37.232 ]' 00:16:37.232 21:09:52 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:37.232 21:09:52 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.232 21:09:52 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:37.232 21:09:53 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.232 21:09:53 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:37.232 21:09:53 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.232 21:09:53 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.232 21:09:53 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.489 21:09:53 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:37.489 21:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.489 21:09:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.489 21:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.489 21:09:53 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:37.489 21:09:53 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.489 21:09:53 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.489 21:09:53 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe8192 1 00:16:37.489 21:09:53 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:37.489 21:09:53 -- target/auth.sh@36 -- # digest=sha256 00:16:37.489 21:09:53 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:37.489 21:09:53 -- target/auth.sh@36 -- # key=key1 
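Once the controller is up, the trace checks that authentication really completed with the configured parameters: the host must report exactly one controller named nvme0, and the target's view of the qpair must show the expected digest, DH group and a "completed" auth state, after which the connection is torn down for the next combination. A sketch of that verification step, reusing the hostrpc/rpc_cmd helpers and the jq paths from the log:

  # The host should report the controller it just attached.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

  # Ask the target for the subsystem's qpairs and inspect the negotiated auth block.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

  # Tear down before the next digest/DH group/key combination.
  hostrpc bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0
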
00:16:37.489 21:09:53 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:37.489 21:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.489 21:09:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.745 21:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.745 21:09:53 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:37.745 21:09:53 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:38.002 00:16:38.002 21:09:53 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:38.002 21:09:53 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:38.002 21:09:53 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.259 21:09:54 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.259 21:09:54 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.259 21:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.259 21:09:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.259 21:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.259 21:09:54 -- target/auth.sh@44 -- # qpairs='[ 00:16:38.259 { 00:16:38.259 "cntlid": 22, 00:16:38.259 "qid": 0, 00:16:38.259 "state": "enabled", 00:16:38.259 "listen_address": { 00:16:38.259 "trtype": "TCP", 00:16:38.259 "adrfam": "IPv4", 00:16:38.259 "traddr": "10.0.0.2", 00:16:38.259 "trsvcid": "4420" 00:16:38.259 }, 00:16:38.259 "peer_address": { 00:16:38.259 "trtype": "TCP", 00:16:38.259 "adrfam": "IPv4", 00:16:38.259 "traddr": "10.0.0.1", 00:16:38.259 "trsvcid": "36578" 00:16:38.259 }, 00:16:38.259 "auth": { 00:16:38.259 "state": "completed", 00:16:38.259 "digest": "sha256", 00:16:38.259 "dhgroup": "ffdhe8192" 00:16:38.259 } 00:16:38.259 } 00:16:38.259 ]' 00:16:38.259 21:09:54 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:38.259 21:09:54 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.259 21:09:54 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:38.259 21:09:54 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.259 21:09:54 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:38.517 21:09:54 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.517 21:09:54 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.517 21:09:54 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.517 21:09:54 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:38.517 21:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.517 21:09:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.517 21:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.517 21:09:54 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:38.517 21:09:54 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:38.517 
21:09:54 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:38.775 21:09:54 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe8192 2 00:16:38.775 21:09:54 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:38.775 21:09:54 -- target/auth.sh@36 -- # digest=sha256 00:16:38.775 21:09:54 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:38.775 21:09:54 -- target/auth.sh@36 -- # key=key2 00:16:38.775 21:09:54 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:38.775 21:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.775 21:09:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.775 21:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.775 21:09:54 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:38.775 21:09:54 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:39.340 00:16:39.340 21:09:55 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:39.340 21:09:55 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:39.340 21:09:55 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.340 21:09:55 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.340 21:09:55 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.340 21:09:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.340 21:09:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.340 21:09:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.340 21:09:55 -- target/auth.sh@44 -- # qpairs='[ 00:16:39.340 { 00:16:39.340 "cntlid": 23, 00:16:39.340 "qid": 0, 00:16:39.340 "state": "enabled", 00:16:39.340 "listen_address": { 00:16:39.340 "trtype": "TCP", 00:16:39.340 "adrfam": "IPv4", 00:16:39.340 "traddr": "10.0.0.2", 00:16:39.340 "trsvcid": "4420" 00:16:39.340 }, 00:16:39.340 "peer_address": { 00:16:39.340 "trtype": "TCP", 00:16:39.340 "adrfam": "IPv4", 00:16:39.340 "traddr": "10.0.0.1", 00:16:39.340 "trsvcid": "36592" 00:16:39.340 }, 00:16:39.340 "auth": { 00:16:39.340 "state": "completed", 00:16:39.340 "digest": "sha256", 00:16:39.340 "dhgroup": "ffdhe8192" 00:16:39.340 } 00:16:39.340 } 00:16:39.340 ]' 00:16:39.340 21:09:55 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:39.597 21:09:55 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.597 21:09:55 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:39.597 21:09:55 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.597 21:09:55 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:39.597 21:09:55 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.597 21:09:55 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.597 21:09:55 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.855 21:09:55 -- 
target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:39.855 21:09:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.855 21:09:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.855 21:09:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.855 21:09:55 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:39.855 21:09:55 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.855 21:09:55 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.855 21:09:55 -- target/auth.sh@85 -- # connect_authenticate sha256 ffdhe8192 3 00:16:39.855 21:09:55 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:39.855 21:09:55 -- target/auth.sh@36 -- # digest=sha256 00:16:39.855 21:09:55 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:39.855 21:09:55 -- target/auth.sh@36 -- # key=key3 00:16:39.855 21:09:55 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:39.855 21:09:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.855 21:09:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.855 21:09:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.855 21:09:55 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.855 21:09:55 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.421 00:16:40.421 21:09:56 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:40.421 21:09:56 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:40.421 21:09:56 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.679 21:09:56 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.679 21:09:56 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.679 21:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.679 21:09:56 -- common/autotest_common.sh@10 -- # set +x 00:16:40.679 21:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.679 21:09:56 -- target/auth.sh@44 -- # qpairs='[ 00:16:40.679 { 00:16:40.679 "cntlid": 24, 00:16:40.679 "qid": 0, 00:16:40.679 "state": "enabled", 00:16:40.679 "listen_address": { 00:16:40.679 "trtype": "TCP", 00:16:40.679 "adrfam": "IPv4", 00:16:40.679 "traddr": "10.0.0.2", 00:16:40.679 "trsvcid": "4420" 00:16:40.679 }, 00:16:40.679 "peer_address": { 00:16:40.679 "trtype": "TCP", 00:16:40.679 "adrfam": "IPv4", 00:16:40.679 "traddr": "10.0.0.1", 00:16:40.679 "trsvcid": "36600" 00:16:40.679 }, 00:16:40.679 "auth": { 00:16:40.679 "state": "completed", 00:16:40.679 "digest": "sha256", 00:16:40.679 "dhgroup": "ffdhe8192" 00:16:40.679 } 00:16:40.679 } 00:16:40.679 ]' 00:16:40.679 21:09:56 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:40.679 21:09:56 -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.679 21:09:56 -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:16:40.679 21:09:56 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.679 21:09:56 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:40.679 21:09:56 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.679 21:09:56 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.679 21:09:56 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.937 21:09:56 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:40.937 21:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.937 21:09:56 -- common/autotest_common.sh@10 -- # set +x 00:16:40.937 21:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.937 21:09:56 -- target/auth.sh@80 -- # for digest in "${digests[@]}" 00:16:40.937 21:09:56 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.937 21:09:56 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:40.937 21:09:56 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.937 21:09:56 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.195 21:09:56 -- target/auth.sh@85 -- # connect_authenticate sha384 null 0 00:16:41.195 21:09:56 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:41.195 21:09:56 -- target/auth.sh@36 -- # digest=sha384 00:16:41.195 21:09:56 -- target/auth.sh@36 -- # dhgroup=null 00:16:41.195 21:09:56 -- target/auth.sh@36 -- # key=key0 00:16:41.195 21:09:56 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:41.195 21:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.195 21:09:56 -- common/autotest_common.sh@10 -- # set +x 00:16:41.195 21:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.195 21:09:56 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:41.195 21:09:56 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:41.195 00:16:41.452 21:09:57 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:41.452 21:09:57 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:41.452 21:09:57 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.452 21:09:57 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.453 21:09:57 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.453 21:09:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.453 21:09:57 -- common/autotest_common.sh@10 -- # set +x 00:16:41.453 21:09:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.453 21:09:57 -- target/auth.sh@44 -- # qpairs='[ 00:16:41.453 { 00:16:41.453 "cntlid": 25, 00:16:41.453 "qid": 0, 00:16:41.453 "state": "enabled", 00:16:41.453 "listen_address": { 00:16:41.453 
"trtype": "TCP", 00:16:41.453 "adrfam": "IPv4", 00:16:41.453 "traddr": "10.0.0.2", 00:16:41.453 "trsvcid": "4420" 00:16:41.453 }, 00:16:41.453 "peer_address": { 00:16:41.453 "trtype": "TCP", 00:16:41.453 "adrfam": "IPv4", 00:16:41.453 "traddr": "10.0.0.1", 00:16:41.453 "trsvcid": "55590" 00:16:41.453 }, 00:16:41.453 "auth": { 00:16:41.453 "state": "completed", 00:16:41.453 "digest": "sha384", 00:16:41.453 "dhgroup": "null" 00:16:41.453 } 00:16:41.453 } 00:16:41.453 ]' 00:16:41.453 21:09:57 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:41.453 21:09:57 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.453 21:09:57 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:41.710 21:09:57 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:41.710 21:09:57 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:41.710 21:09:57 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.710 21:09:57 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.710 21:09:57 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.710 21:09:57 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:41.710 21:09:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.710 21:09:57 -- common/autotest_common.sh@10 -- # set +x 00:16:41.710 21:09:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.710 21:09:57 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:41.710 21:09:57 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.710 21:09:57 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.967 21:09:57 -- target/auth.sh@85 -- # connect_authenticate sha384 null 1 00:16:41.967 21:09:57 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:41.967 21:09:57 -- target/auth.sh@36 -- # digest=sha384 00:16:41.967 21:09:57 -- target/auth.sh@36 -- # dhgroup=null 00:16:41.967 21:09:57 -- target/auth.sh@36 -- # key=key1 00:16:41.967 21:09:57 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:41.967 21:09:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.967 21:09:57 -- common/autotest_common.sh@10 -- # set +x 00:16:41.967 21:09:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.967 21:09:57 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:41.967 21:09:57 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:42.225 00:16:42.225 21:09:57 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:42.225 21:09:57 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.225 21:09:57 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:42.483 21:09:58 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.483 21:09:58 
-- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.483 21:09:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.483 21:09:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.483 21:09:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.483 21:09:58 -- target/auth.sh@44 -- # qpairs='[ 00:16:42.483 { 00:16:42.483 "cntlid": 26, 00:16:42.483 "qid": 0, 00:16:42.483 "state": "enabled", 00:16:42.483 "listen_address": { 00:16:42.483 "trtype": "TCP", 00:16:42.483 "adrfam": "IPv4", 00:16:42.483 "traddr": "10.0.0.2", 00:16:42.483 "trsvcid": "4420" 00:16:42.483 }, 00:16:42.483 "peer_address": { 00:16:42.483 "trtype": "TCP", 00:16:42.483 "adrfam": "IPv4", 00:16:42.483 "traddr": "10.0.0.1", 00:16:42.483 "trsvcid": "55604" 00:16:42.483 }, 00:16:42.483 "auth": { 00:16:42.483 "state": "completed", 00:16:42.483 "digest": "sha384", 00:16:42.483 "dhgroup": "null" 00:16:42.483 } 00:16:42.483 } 00:16:42.483 ]' 00:16:42.483 21:09:58 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:42.483 21:09:58 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.483 21:09:58 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:42.483 21:09:58 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:42.483 21:09:58 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:42.483 21:09:58 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.483 21:09:58 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.483 21:09:58 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.741 21:09:58 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:42.741 21:09:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.741 21:09:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.741 21:09:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.741 21:09:58 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:42.741 21:09:58 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.741 21:09:58 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.741 21:09:58 -- target/auth.sh@85 -- # connect_authenticate sha384 null 2 00:16:42.741 21:09:58 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:42.741 21:09:58 -- target/auth.sh@36 -- # digest=sha384 00:16:42.741 21:09:58 -- target/auth.sh@36 -- # dhgroup=null 00:16:42.741 21:09:58 -- target/auth.sh@36 -- # key=key2 00:16:42.741 21:09:58 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:42.741 21:09:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.741 21:09:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.999 21:09:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.999 21:09:58 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:42.999 21:09:58 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:42.999 00:16:42.999 21:09:58 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:42.999 21:09:58 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:42.999 21:09:58 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.256 21:09:59 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.256 21:09:59 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.256 21:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.256 21:09:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.256 21:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.256 21:09:59 -- target/auth.sh@44 -- # qpairs='[ 00:16:43.256 { 00:16:43.256 "cntlid": 27, 00:16:43.256 "qid": 0, 00:16:43.256 "state": "enabled", 00:16:43.256 "listen_address": { 00:16:43.256 "trtype": "TCP", 00:16:43.256 "adrfam": "IPv4", 00:16:43.256 "traddr": "10.0.0.2", 00:16:43.256 "trsvcid": "4420" 00:16:43.256 }, 00:16:43.256 "peer_address": { 00:16:43.256 "trtype": "TCP", 00:16:43.256 "adrfam": "IPv4", 00:16:43.256 "traddr": "10.0.0.1", 00:16:43.256 "trsvcid": "55608" 00:16:43.257 }, 00:16:43.257 "auth": { 00:16:43.257 "state": "completed", 00:16:43.257 "digest": "sha384", 00:16:43.257 "dhgroup": "null" 00:16:43.257 } 00:16:43.257 } 00:16:43.257 ]' 00:16:43.257 21:09:59 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:43.257 21:09:59 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.257 21:09:59 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:43.257 21:09:59 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:43.257 21:09:59 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:43.514 21:09:59 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.514 21:09:59 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.514 21:09:59 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.514 21:09:59 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:43.514 21:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.514 21:09:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.514 21:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.514 21:09:59 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:43.514 21:09:59 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.514 21:09:59 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.772 21:09:59 -- target/auth.sh@85 -- # connect_authenticate sha384 null 3 00:16:43.772 21:09:59 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:43.772 21:09:59 -- target/auth.sh@36 -- # digest=sha384 00:16:43.772 21:09:59 -- target/auth.sh@36 -- # dhgroup=null 00:16:43.772 21:09:59 -- target/auth.sh@36 -- # key=key3 00:16:43.772 21:09:59 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:43.772 21:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.772 21:09:59 -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.772 21:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.773 21:09:59 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.773 21:09:59 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.030 00:16:44.030 21:09:59 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:44.031 21:09:59 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:44.031 21:09:59 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.288 21:09:59 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.288 21:09:59 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.288 21:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.288 21:09:59 -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 21:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.288 21:09:59 -- target/auth.sh@44 -- # qpairs='[ 00:16:44.288 { 00:16:44.288 "cntlid": 28, 00:16:44.288 "qid": 0, 00:16:44.288 "state": "enabled", 00:16:44.288 "listen_address": { 00:16:44.288 "trtype": "TCP", 00:16:44.288 "adrfam": "IPv4", 00:16:44.288 "traddr": "10.0.0.2", 00:16:44.288 "trsvcid": "4420" 00:16:44.288 }, 00:16:44.288 "peer_address": { 00:16:44.288 "trtype": "TCP", 00:16:44.288 "adrfam": "IPv4", 00:16:44.288 "traddr": "10.0.0.1", 00:16:44.288 "trsvcid": "55614" 00:16:44.288 }, 00:16:44.289 "auth": { 00:16:44.289 "state": "completed", 00:16:44.289 "digest": "sha384", 00:16:44.289 "dhgroup": "null" 00:16:44.289 } 00:16:44.289 } 00:16:44.289 ]' 00:16:44.289 21:10:00 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:44.289 21:10:00 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.289 21:10:00 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:44.289 21:10:00 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:44.289 21:10:00 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:44.289 21:10:00 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.289 21:10:00 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.289 21:10:00 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.546 21:10:00 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:44.546 21:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.546 21:10:00 -- common/autotest_common.sh@10 -- # set +x 00:16:44.546 21:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.546 21:10:00 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.546 21:10:00 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:44.546 21:10:00 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.546 21:10:00 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 
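The sha384 passes just above negotiated dhgroup "null", i.e. DH-HMAC-CHAP challenge/response without the optional Diffie-Hellman augmentation; from this point the trace repeats the same passes with the RFC 7919 FFDHE groups, starting with the smallest one. The only difference on the host side is the allowed-group setting:

  # Without DH augmentation (what the preceding sha384 passes used):
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
  # With the 2048-bit RFC 7919 group, exercised next:
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
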
00:16:44.804 21:10:00 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe2048 0 00:16:44.804 21:10:00 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:44.804 21:10:00 -- target/auth.sh@36 -- # digest=sha384 00:16:44.804 21:10:00 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:44.804 21:10:00 -- target/auth.sh@36 -- # key=key0 00:16:44.804 21:10:00 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:44.804 21:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.804 21:10:00 -- common/autotest_common.sh@10 -- # set +x 00:16:44.804 21:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.804 21:10:00 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:44.804 21:10:00 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:44.804 00:16:45.061 21:10:00 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:45.061 21:10:00 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.061 21:10:00 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:45.061 21:10:00 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.061 21:10:00 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.061 21:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.061 21:10:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.061 21:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.061 21:10:00 -- target/auth.sh@44 -- # qpairs='[ 00:16:45.061 { 00:16:45.061 "cntlid": 29, 00:16:45.061 "qid": 0, 00:16:45.061 "state": "enabled", 00:16:45.061 "listen_address": { 00:16:45.061 "trtype": "TCP", 00:16:45.061 "adrfam": "IPv4", 00:16:45.061 "traddr": "10.0.0.2", 00:16:45.061 "trsvcid": "4420" 00:16:45.061 }, 00:16:45.061 "peer_address": { 00:16:45.061 "trtype": "TCP", 00:16:45.061 "adrfam": "IPv4", 00:16:45.061 "traddr": "10.0.0.1", 00:16:45.061 "trsvcid": "55622" 00:16:45.061 }, 00:16:45.061 "auth": { 00:16:45.061 "state": "completed", 00:16:45.061 "digest": "sha384", 00:16:45.061 "dhgroup": "ffdhe2048" 00:16:45.061 } 00:16:45.061 } 00:16:45.061 ]' 00:16:45.061 21:10:00 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:45.061 21:10:00 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.061 21:10:00 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:45.318 21:10:01 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.318 21:10:01 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:45.318 21:10:01 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.319 21:10:01 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.319 21:10:01 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.319 21:10:01 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:45.319 21:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 
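Each DH group is exercised with every key index before the script moves on to the next group, which is why the same set_options / attach / detach pattern repeats through this log with only the --dhchap-dhgroups value and the key number changing. The driver is roughly a pair of nested loops of this shape (the dhgroups list is inferred from the groups that appear in this trace; the keys array is generated by the script and its contents are not shown here):

    dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # constrain the host to a single digest/dhgroup, then run one full pass
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done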
00:16:45.319 21:10:01 -- common/autotest_common.sh@10 -- # set +x 00:16:45.319 21:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.319 21:10:01 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:45.319 21:10:01 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.319 21:10:01 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.576 21:10:01 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe2048 1 00:16:45.576 21:10:01 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:45.576 21:10:01 -- target/auth.sh@36 -- # digest=sha384 00:16:45.576 21:10:01 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:45.576 21:10:01 -- target/auth.sh@36 -- # key=key1 00:16:45.576 21:10:01 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:45.576 21:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.576 21:10:01 -- common/autotest_common.sh@10 -- # set +x 00:16:45.576 21:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.576 21:10:01 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:45.576 21:10:01 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:45.833 00:16:45.833 21:10:01 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:45.833 21:10:01 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:45.833 21:10:01 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.091 21:10:01 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.091 21:10:01 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.091 21:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.091 21:10:01 -- common/autotest_common.sh@10 -- # set +x 00:16:46.091 21:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.091 21:10:01 -- target/auth.sh@44 -- # qpairs='[ 00:16:46.091 { 00:16:46.091 "cntlid": 30, 00:16:46.091 "qid": 0, 00:16:46.091 "state": "enabled", 00:16:46.091 "listen_address": { 00:16:46.091 "trtype": "TCP", 00:16:46.091 "adrfam": "IPv4", 00:16:46.091 "traddr": "10.0.0.2", 00:16:46.091 "trsvcid": "4420" 00:16:46.091 }, 00:16:46.091 "peer_address": { 00:16:46.091 "trtype": "TCP", 00:16:46.091 "adrfam": "IPv4", 00:16:46.091 "traddr": "10.0.0.1", 00:16:46.091 "trsvcid": "55638" 00:16:46.091 }, 00:16:46.091 "auth": { 00:16:46.091 "state": "completed", 00:16:46.091 "digest": "sha384", 00:16:46.091 "dhgroup": "ffdhe2048" 00:16:46.091 } 00:16:46.091 } 00:16:46.091 ]' 00:16:46.091 21:10:01 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:46.091 21:10:01 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.091 21:10:01 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:46.091 21:10:01 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.091 21:10:01 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:46.091 
21:10:02 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.091 21:10:02 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.091 21:10:02 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.350 21:10:02 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:46.350 21:10:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.350 21:10:02 -- common/autotest_common.sh@10 -- # set +x 00:16:46.350 21:10:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.350 21:10:02 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:46.350 21:10:02 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.350 21:10:02 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.667 21:10:02 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe2048 2 00:16:46.667 21:10:02 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:46.667 21:10:02 -- target/auth.sh@36 -- # digest=sha384 00:16:46.667 21:10:02 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:46.667 21:10:02 -- target/auth.sh@36 -- # key=key2 00:16:46.667 21:10:02 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:46.667 21:10:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.667 21:10:02 -- common/autotest_common.sh@10 -- # set +x 00:16:46.667 21:10:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.667 21:10:02 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:46.667 21:10:02 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:46.925 00:16:46.925 21:10:02 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:46.925 21:10:02 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:46.925 21:10:02 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.925 21:10:02 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.925 21:10:02 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.925 21:10:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.925 21:10:02 -- common/autotest_common.sh@10 -- # set +x 00:16:46.925 21:10:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.925 21:10:02 -- target/auth.sh@44 -- # qpairs='[ 00:16:46.925 { 00:16:46.925 "cntlid": 31, 00:16:46.925 "qid": 0, 00:16:46.925 "state": "enabled", 00:16:46.925 "listen_address": { 00:16:46.925 "trtype": "TCP", 00:16:46.925 "adrfam": "IPv4", 00:16:46.925 "traddr": "10.0.0.2", 00:16:46.925 "trsvcid": "4420" 00:16:46.925 }, 00:16:46.925 "peer_address": { 00:16:46.925 "trtype": "TCP", 00:16:46.925 "adrfam": "IPv4", 00:16:46.925 "traddr": "10.0.0.1", 00:16:46.925 "trsvcid": "55644" 00:16:46.925 }, 00:16:46.925 "auth": { 
00:16:46.925 "state": "completed", 00:16:46.925 "digest": "sha384", 00:16:46.925 "dhgroup": "ffdhe2048" 00:16:46.925 } 00:16:46.925 } 00:16:46.925 ]' 00:16:46.925 21:10:02 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:46.925 21:10:02 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.925 21:10:02 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:47.183 21:10:02 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.183 21:10:02 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:47.183 21:10:02 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.183 21:10:02 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.183 21:10:02 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.439 21:10:03 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:47.439 21:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.439 21:10:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.439 21:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.439 21:10:03 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:47.439 21:10:03 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:47.439 21:10:03 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:47.439 21:10:03 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe2048 3 00:16:47.439 21:10:03 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:47.439 21:10:03 -- target/auth.sh@36 -- # digest=sha384 00:16:47.439 21:10:03 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:47.439 21:10:03 -- target/auth.sh@36 -- # key=key3 00:16:47.439 21:10:03 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:47.439 21:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.439 21:10:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.439 21:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.440 21:10:03 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.440 21:10:03 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.696 00:16:47.696 21:10:03 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:47.696 21:10:03 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:47.696 21:10:03 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.954 21:10:03 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.954 21:10:03 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.954 21:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.954 21:10:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.954 21:10:03 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:16:47.954 21:10:03 -- target/auth.sh@44 -- # qpairs='[ 00:16:47.954 { 00:16:47.954 "cntlid": 32, 00:16:47.954 "qid": 0, 00:16:47.954 "state": "enabled", 00:16:47.954 "listen_address": { 00:16:47.954 "trtype": "TCP", 00:16:47.954 "adrfam": "IPv4", 00:16:47.954 "traddr": "10.0.0.2", 00:16:47.954 "trsvcid": "4420" 00:16:47.954 }, 00:16:47.954 "peer_address": { 00:16:47.954 "trtype": "TCP", 00:16:47.954 "adrfam": "IPv4", 00:16:47.954 "traddr": "10.0.0.1", 00:16:47.954 "trsvcid": "55654" 00:16:47.954 }, 00:16:47.954 "auth": { 00:16:47.954 "state": "completed", 00:16:47.954 "digest": "sha384", 00:16:47.954 "dhgroup": "ffdhe2048" 00:16:47.954 } 00:16:47.954 } 00:16:47.954 ]' 00:16:47.954 21:10:03 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:47.954 21:10:03 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.954 21:10:03 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:47.954 21:10:03 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.954 21:10:03 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:47.954 21:10:03 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.954 21:10:03 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.954 21:10:03 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.212 21:10:04 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:48.212 21:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.212 21:10:04 -- common/autotest_common.sh@10 -- # set +x 00:16:48.212 21:10:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.212 21:10:04 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.212 21:10:04 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:48.212 21:10:04 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.212 21:10:04 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.470 21:10:04 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe3072 0 00:16:48.470 21:10:04 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:48.470 21:10:04 -- target/auth.sh@36 -- # digest=sha384 00:16:48.470 21:10:04 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:48.470 21:10:04 -- target/auth.sh@36 -- # key=key0 00:16:48.470 21:10:04 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:48.470 21:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.470 21:10:04 -- common/autotest_common.sh@10 -- # set +x 00:16:48.470 21:10:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.470 21:10:04 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:48.470 21:10:04 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:48.728 00:16:48.728 21:10:04 -- target/auth.sh@43 -- # hostrpc 
bdev_nvme_get_controllers 00:16:48.728 21:10:04 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:48.728 21:10:04 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.728 21:10:04 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.728 21:10:04 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.728 21:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.728 21:10:04 -- common/autotest_common.sh@10 -- # set +x 00:16:48.986 21:10:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.986 21:10:04 -- target/auth.sh@44 -- # qpairs='[ 00:16:48.986 { 00:16:48.986 "cntlid": 33, 00:16:48.986 "qid": 0, 00:16:48.986 "state": "enabled", 00:16:48.986 "listen_address": { 00:16:48.986 "trtype": "TCP", 00:16:48.986 "adrfam": "IPv4", 00:16:48.986 "traddr": "10.0.0.2", 00:16:48.986 "trsvcid": "4420" 00:16:48.986 }, 00:16:48.986 "peer_address": { 00:16:48.986 "trtype": "TCP", 00:16:48.986 "adrfam": "IPv4", 00:16:48.986 "traddr": "10.0.0.1", 00:16:48.986 "trsvcid": "55670" 00:16:48.986 }, 00:16:48.986 "auth": { 00:16:48.986 "state": "completed", 00:16:48.986 "digest": "sha384", 00:16:48.986 "dhgroup": "ffdhe3072" 00:16:48.986 } 00:16:48.986 } 00:16:48.986 ]' 00:16:48.986 21:10:04 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:48.986 21:10:04 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.986 21:10:04 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:48.986 21:10:04 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.986 21:10:04 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:48.986 21:10:04 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.986 21:10:04 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.986 21:10:04 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.244 21:10:04 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:49.244 21:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.244 21:10:04 -- common/autotest_common.sh@10 -- # set +x 00:16:49.244 21:10:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.244 21:10:04 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:49.244 21:10:04 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.244 21:10:04 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.244 21:10:05 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe3072 1 00:16:49.244 21:10:05 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:49.244 21:10:05 -- target/auth.sh@36 -- # digest=sha384 00:16:49.244 21:10:05 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:49.244 21:10:05 -- target/auth.sh@36 -- # key=key1 00:16:49.244 21:10:05 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:49.244 21:10:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.244 21:10:05 -- common/autotest_common.sh@10 -- # set +x 00:16:49.244 21:10:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.244 
21:10:05 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:49.244 21:10:05 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:49.501 00:16:49.501 21:10:05 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:49.501 21:10:05 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:49.501 21:10:05 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.759 21:10:05 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.759 21:10:05 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.759 21:10:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.759 21:10:05 -- common/autotest_common.sh@10 -- # set +x 00:16:49.759 21:10:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.759 21:10:05 -- target/auth.sh@44 -- # qpairs='[ 00:16:49.759 { 00:16:49.759 "cntlid": 34, 00:16:49.759 "qid": 0, 00:16:49.759 "state": "enabled", 00:16:49.759 "listen_address": { 00:16:49.759 "trtype": "TCP", 00:16:49.759 "adrfam": "IPv4", 00:16:49.759 "traddr": "10.0.0.2", 00:16:49.759 "trsvcid": "4420" 00:16:49.759 }, 00:16:49.759 "peer_address": { 00:16:49.759 "trtype": "TCP", 00:16:49.759 "adrfam": "IPv4", 00:16:49.759 "traddr": "10.0.0.1", 00:16:49.759 "trsvcid": "55674" 00:16:49.759 }, 00:16:49.759 "auth": { 00:16:49.759 "state": "completed", 00:16:49.759 "digest": "sha384", 00:16:49.759 "dhgroup": "ffdhe3072" 00:16:49.759 } 00:16:49.759 } 00:16:49.759 ]' 00:16:49.759 21:10:05 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:49.759 21:10:05 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.759 21:10:05 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:50.017 21:10:05 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.017 21:10:05 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:50.017 21:10:05 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.017 21:10:05 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.017 21:10:05 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.017 21:10:05 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:50.017 21:10:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.017 21:10:05 -- common/autotest_common.sh@10 -- # set +x 00:16:50.017 21:10:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.017 21:10:05 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:50.017 21:10:05 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.017 21:10:05 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.275 21:10:06 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe3072 2 00:16:50.275 21:10:06 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:50.275 21:10:06 
-- target/auth.sh@36 -- # digest=sha384 00:16:50.275 21:10:06 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:50.275 21:10:06 -- target/auth.sh@36 -- # key=key2 00:16:50.275 21:10:06 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:50.275 21:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.275 21:10:06 -- common/autotest_common.sh@10 -- # set +x 00:16:50.275 21:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.275 21:10:06 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:50.275 21:10:06 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:50.538 00:16:50.538 21:10:06 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:50.538 21:10:06 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:50.538 21:10:06 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.799 21:10:06 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.799 21:10:06 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.799 21:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.799 21:10:06 -- common/autotest_common.sh@10 -- # set +x 00:16:50.799 21:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.799 21:10:06 -- target/auth.sh@44 -- # qpairs='[ 00:16:50.799 { 00:16:50.799 "cntlid": 35, 00:16:50.799 "qid": 0, 00:16:50.799 "state": "enabled", 00:16:50.799 "listen_address": { 00:16:50.799 "trtype": "TCP", 00:16:50.799 "adrfam": "IPv4", 00:16:50.799 "traddr": "10.0.0.2", 00:16:50.799 "trsvcid": "4420" 00:16:50.799 }, 00:16:50.799 "peer_address": { 00:16:50.799 "trtype": "TCP", 00:16:50.799 "adrfam": "IPv4", 00:16:50.799 "traddr": "10.0.0.1", 00:16:50.799 "trsvcid": "36860" 00:16:50.799 }, 00:16:50.799 "auth": { 00:16:50.799 "state": "completed", 00:16:50.799 "digest": "sha384", 00:16:50.799 "dhgroup": "ffdhe3072" 00:16:50.799 } 00:16:50.799 } 00:16:50.799 ]' 00:16:50.799 21:10:06 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:50.799 21:10:06 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.799 21:10:06 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:50.799 21:10:06 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.799 21:10:06 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:50.799 21:10:06 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.799 21:10:06 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.799 21:10:06 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.057 21:10:06 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:51.057 21:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.057 21:10:06 -- common/autotest_common.sh@10 -- # set +x 00:16:51.057 21:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.057 21:10:06 -- target/auth.sh@82 -- # for keyid in 
"${!keys[@]}" 00:16:51.057 21:10:06 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.057 21:10:06 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.315 21:10:07 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe3072 3 00:16:51.315 21:10:07 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:51.315 21:10:07 -- target/auth.sh@36 -- # digest=sha384 00:16:51.315 21:10:07 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:51.315 21:10:07 -- target/auth.sh@36 -- # key=key3 00:16:51.315 21:10:07 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:51.315 21:10:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.315 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.315 21:10:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.315 21:10:07 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.315 21:10:07 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.572 00:16:51.572 21:10:07 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:51.572 21:10:07 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:51.572 21:10:07 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.572 21:10:07 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.572 21:10:07 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.572 21:10:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.572 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.572 21:10:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.572 21:10:07 -- target/auth.sh@44 -- # qpairs='[ 00:16:51.572 { 00:16:51.572 "cntlid": 36, 00:16:51.572 "qid": 0, 00:16:51.572 "state": "enabled", 00:16:51.572 "listen_address": { 00:16:51.572 "trtype": "TCP", 00:16:51.572 "adrfam": "IPv4", 00:16:51.572 "traddr": "10.0.0.2", 00:16:51.572 "trsvcid": "4420" 00:16:51.572 }, 00:16:51.572 "peer_address": { 00:16:51.572 "trtype": "TCP", 00:16:51.573 "adrfam": "IPv4", 00:16:51.573 "traddr": "10.0.0.1", 00:16:51.573 "trsvcid": "36864" 00:16:51.573 }, 00:16:51.573 "auth": { 00:16:51.573 "state": "completed", 00:16:51.573 "digest": "sha384", 00:16:51.573 "dhgroup": "ffdhe3072" 00:16:51.573 } 00:16:51.573 } 00:16:51.573 ]' 00:16:51.573 21:10:07 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:51.573 21:10:07 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.830 21:10:07 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:51.830 21:10:07 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.830 21:10:07 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:51.830 21:10:07 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.830 21:10:07 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.830 21:10:07 -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.830 21:10:07 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:51.830 21:10:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.830 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:16:52.088 21:10:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.088 21:10:07 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.088 21:10:07 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:52.088 21:10:07 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.088 21:10:07 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.088 21:10:07 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe4096 0 00:16:52.088 21:10:07 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:52.088 21:10:07 -- target/auth.sh@36 -- # digest=sha384 00:16:52.088 21:10:07 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:52.088 21:10:07 -- target/auth.sh@36 -- # key=key0 00:16:52.088 21:10:07 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:52.088 21:10:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.088 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:16:52.088 21:10:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.088 21:10:07 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:52.088 21:10:07 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:52.346 00:16:52.346 21:10:08 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:52.346 21:10:08 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:52.346 21:10:08 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.603 21:10:08 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.603 21:10:08 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.603 21:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.603 21:10:08 -- common/autotest_common.sh@10 -- # set +x 00:16:52.603 21:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.603 21:10:08 -- target/auth.sh@44 -- # qpairs='[ 00:16:52.603 { 00:16:52.603 "cntlid": 37, 00:16:52.603 "qid": 0, 00:16:52.603 "state": "enabled", 00:16:52.603 "listen_address": { 00:16:52.603 "trtype": "TCP", 00:16:52.603 "adrfam": "IPv4", 00:16:52.603 "traddr": "10.0.0.2", 00:16:52.603 "trsvcid": "4420" 00:16:52.603 }, 00:16:52.603 "peer_address": { 00:16:52.603 "trtype": "TCP", 00:16:52.603 "adrfam": "IPv4", 00:16:52.603 "traddr": "10.0.0.1", 00:16:52.603 "trsvcid": "36874" 00:16:52.603 }, 00:16:52.603 "auth": { 00:16:52.603 "state": "completed", 00:16:52.603 "digest": "sha384", 00:16:52.603 "dhgroup": "ffdhe4096" 
00:16:52.603 } 00:16:52.603 } 00:16:52.603 ]' 00:16:52.603 21:10:08 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:52.603 21:10:08 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.603 21:10:08 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:52.603 21:10:08 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.603 21:10:08 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:52.603 21:10:08 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.603 21:10:08 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.603 21:10:08 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.861 21:10:08 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:52.861 21:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.861 21:10:08 -- common/autotest_common.sh@10 -- # set +x 00:16:52.861 21:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.861 21:10:08 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:52.861 21:10:08 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.861 21:10:08 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.119 21:10:08 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe4096 1 00:16:53.119 21:10:08 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:53.119 21:10:08 -- target/auth.sh@36 -- # digest=sha384 00:16:53.119 21:10:08 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:53.119 21:10:08 -- target/auth.sh@36 -- # key=key1 00:16:53.119 21:10:08 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:53.119 21:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.119 21:10:08 -- common/autotest_common.sh@10 -- # set +x 00:16:53.119 21:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.119 21:10:08 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:53.119 21:10:08 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:53.377 00:16:53.377 21:10:09 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:53.377 21:10:09 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:53.377 21:10:09 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.635 21:10:09 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.635 21:10:09 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.635 21:10:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.635 21:10:09 -- common/autotest_common.sh@10 -- # set +x 00:16:53.635 21:10:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.635 21:10:09 -- target/auth.sh@44 -- # qpairs='[ 00:16:53.635 { 00:16:53.635 
"cntlid": 38, 00:16:53.635 "qid": 0, 00:16:53.635 "state": "enabled", 00:16:53.635 "listen_address": { 00:16:53.635 "trtype": "TCP", 00:16:53.635 "adrfam": "IPv4", 00:16:53.635 "traddr": "10.0.0.2", 00:16:53.635 "trsvcid": "4420" 00:16:53.635 }, 00:16:53.635 "peer_address": { 00:16:53.635 "trtype": "TCP", 00:16:53.635 "adrfam": "IPv4", 00:16:53.635 "traddr": "10.0.0.1", 00:16:53.635 "trsvcid": "36882" 00:16:53.635 }, 00:16:53.635 "auth": { 00:16:53.635 "state": "completed", 00:16:53.635 "digest": "sha384", 00:16:53.635 "dhgroup": "ffdhe4096" 00:16:53.635 } 00:16:53.635 } 00:16:53.635 ]' 00:16:53.635 21:10:09 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:53.635 21:10:09 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.635 21:10:09 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:53.635 21:10:09 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.635 21:10:09 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:53.635 21:10:09 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.635 21:10:09 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.635 21:10:09 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.893 21:10:09 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:53.893 21:10:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.893 21:10:09 -- common/autotest_common.sh@10 -- # set +x 00:16:53.893 21:10:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.893 21:10:09 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:53.893 21:10:09 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.893 21:10:09 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.150 21:10:09 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe4096 2 00:16:54.150 21:10:09 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:54.150 21:10:09 -- target/auth.sh@36 -- # digest=sha384 00:16:54.150 21:10:09 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:54.150 21:10:09 -- target/auth.sh@36 -- # key=key2 00:16:54.150 21:10:09 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:54.150 21:10:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.150 21:10:09 -- common/autotest_common.sh@10 -- # set +x 00:16:54.150 21:10:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.150 21:10:09 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:54.150 21:10:09 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:54.408 00:16:54.408 21:10:10 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:54.408 21:10:10 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:16:54.408 21:10:10 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:54.408 21:10:10 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.408 21:10:10 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.408 21:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.408 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.408 21:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.408 21:10:10 -- target/auth.sh@44 -- # qpairs='[ 00:16:54.408 { 00:16:54.408 "cntlid": 39, 00:16:54.408 "qid": 0, 00:16:54.408 "state": "enabled", 00:16:54.408 "listen_address": { 00:16:54.408 "trtype": "TCP", 00:16:54.408 "adrfam": "IPv4", 00:16:54.408 "traddr": "10.0.0.2", 00:16:54.408 "trsvcid": "4420" 00:16:54.408 }, 00:16:54.408 "peer_address": { 00:16:54.408 "trtype": "TCP", 00:16:54.408 "adrfam": "IPv4", 00:16:54.408 "traddr": "10.0.0.1", 00:16:54.408 "trsvcid": "36894" 00:16:54.408 }, 00:16:54.408 "auth": { 00:16:54.408 "state": "completed", 00:16:54.408 "digest": "sha384", 00:16:54.408 "dhgroup": "ffdhe4096" 00:16:54.408 } 00:16:54.408 } 00:16:54.408 ]' 00:16:54.408 21:10:10 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:54.666 21:10:10 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.666 21:10:10 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:54.666 21:10:10 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.666 21:10:10 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:54.666 21:10:10 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.666 21:10:10 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.666 21:10:10 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.666 21:10:10 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:54.666 21:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.666 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.924 21:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.924 21:10:10 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:54.924 21:10:10 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.924 21:10:10 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.924 21:10:10 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe4096 3 00:16:54.924 21:10:10 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:54.924 21:10:10 -- target/auth.sh@36 -- # digest=sha384 00:16:54.924 21:10:10 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:54.924 21:10:10 -- target/auth.sh@36 -- # key=key3 00:16:54.924 21:10:10 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:54.924 21:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.924 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.924 21:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.924 21:10:10 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key3 00:16:54.924 21:10:10 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.182 00:16:55.182 21:10:11 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:55.182 21:10:11 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:55.182 21:10:11 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.440 21:10:11 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.440 21:10:11 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.440 21:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.440 21:10:11 -- common/autotest_common.sh@10 -- # set +x 00:16:55.440 21:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.440 21:10:11 -- target/auth.sh@44 -- # qpairs='[ 00:16:55.440 { 00:16:55.440 "cntlid": 40, 00:16:55.440 "qid": 0, 00:16:55.440 "state": "enabled", 00:16:55.440 "listen_address": { 00:16:55.440 "trtype": "TCP", 00:16:55.440 "adrfam": "IPv4", 00:16:55.440 "traddr": "10.0.0.2", 00:16:55.440 "trsvcid": "4420" 00:16:55.440 }, 00:16:55.440 "peer_address": { 00:16:55.440 "trtype": "TCP", 00:16:55.440 "adrfam": "IPv4", 00:16:55.440 "traddr": "10.0.0.1", 00:16:55.440 "trsvcid": "36902" 00:16:55.440 }, 00:16:55.440 "auth": { 00:16:55.440 "state": "completed", 00:16:55.440 "digest": "sha384", 00:16:55.440 "dhgroup": "ffdhe4096" 00:16:55.440 } 00:16:55.440 } 00:16:55.440 ]' 00:16:55.440 21:10:11 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:55.440 21:10:11 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.440 21:10:11 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:55.440 21:10:11 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.440 21:10:11 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:55.440 21:10:11 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.440 21:10:11 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.440 21:10:11 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.698 21:10:11 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:55.698 21:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.698 21:10:11 -- common/autotest_common.sh@10 -- # set +x 00:16:55.698 21:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.698 21:10:11 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.698 21:10:11 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:55.698 21:10:11 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.698 21:10:11 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.956 21:10:11 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe6144 0 00:16:55.956 21:10:11 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:55.956 21:10:11 -- target/auth.sh@36 -- # digest=sha384 00:16:55.956 21:10:11 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:16:55.956 21:10:11 -- target/auth.sh@36 -- # key=key0 00:16:55.956 21:10:11 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:16:55.956 21:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.956 21:10:11 -- common/autotest_common.sh@10 -- # set +x 00:16:55.956 21:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.956 21:10:11 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:55.956 21:10:11 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:56.213 00:16:56.213 21:10:12 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:56.213 21:10:12 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:56.213 21:10:12 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.471 21:10:12 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.471 21:10:12 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.471 21:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.471 21:10:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.471 21:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.471 21:10:12 -- target/auth.sh@44 -- # qpairs='[ 00:16:56.471 { 00:16:56.471 "cntlid": 41, 00:16:56.471 "qid": 0, 00:16:56.471 "state": "enabled", 00:16:56.471 "listen_address": { 00:16:56.471 "trtype": "TCP", 00:16:56.471 "adrfam": "IPv4", 00:16:56.471 "traddr": "10.0.0.2", 00:16:56.471 "trsvcid": "4420" 00:16:56.471 }, 00:16:56.471 "peer_address": { 00:16:56.471 "trtype": "TCP", 00:16:56.471 "adrfam": "IPv4", 00:16:56.471 "traddr": "10.0.0.1", 00:16:56.471 "trsvcid": "36908" 00:16:56.471 }, 00:16:56.471 "auth": { 00:16:56.471 "state": "completed", 00:16:56.471 "digest": "sha384", 00:16:56.471 "dhgroup": "ffdhe6144" 00:16:56.471 } 00:16:56.471 } 00:16:56.471 ]' 00:16:56.471 21:10:12 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:56.471 21:10:12 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.471 21:10:12 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:56.471 21:10:12 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.471 21:10:12 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:56.471 21:10:12 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.471 21:10:12 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.471 21:10:12 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.729 21:10:12 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:56.729 21:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.729 21:10:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.729 21:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.729 21:10:12 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:56.729 21:10:12 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.729 21:10:12 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.987 21:10:12 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe6144 1 00:16:56.987 21:10:12 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:56.987 21:10:12 -- target/auth.sh@36 -- # digest=sha384 00:16:56.987 21:10:12 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:56.987 21:10:12 -- target/auth.sh@36 -- # key=key1 00:16:56.987 21:10:12 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:16:56.987 21:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.987 21:10:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.987 21:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.987 21:10:12 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:56.987 21:10:12 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:57.245 00:16:57.245 21:10:13 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:57.245 21:10:13 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:57.245 21:10:13 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.502 21:10:13 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.502 21:10:13 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.502 21:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.502 21:10:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.502 21:10:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.502 21:10:13 -- target/auth.sh@44 -- # qpairs='[ 00:16:57.502 { 00:16:57.502 "cntlid": 42, 00:16:57.502 "qid": 0, 00:16:57.502 "state": "enabled", 00:16:57.502 "listen_address": { 00:16:57.502 "trtype": "TCP", 00:16:57.502 "adrfam": "IPv4", 00:16:57.502 "traddr": "10.0.0.2", 00:16:57.502 "trsvcid": "4420" 00:16:57.502 }, 00:16:57.502 "peer_address": { 00:16:57.502 "trtype": "TCP", 00:16:57.502 "adrfam": "IPv4", 00:16:57.502 "traddr": "10.0.0.1", 00:16:57.502 "trsvcid": "36922" 00:16:57.502 }, 00:16:57.502 "auth": { 00:16:57.502 "state": "completed", 00:16:57.502 "digest": "sha384", 00:16:57.502 "dhgroup": "ffdhe6144" 00:16:57.502 } 00:16:57.502 } 00:16:57.502 ]' 00:16:57.502 21:10:13 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:57.502 21:10:13 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.502 21:10:13 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:57.502 21:10:13 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.502 21:10:13 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:57.502 21:10:13 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.502 21:10:13 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.502 21:10:13 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.760 21:10:13 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:57.760 21:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.760 21:10:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.760 21:10:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.760 21:10:13 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:57.760 21:10:13 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.760 21:10:13 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:58.018 21:10:13 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe6144 2 00:16:58.018 21:10:13 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:58.018 21:10:13 -- target/auth.sh@36 -- # digest=sha384 00:16:58.018 21:10:13 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:58.018 21:10:13 -- target/auth.sh@36 -- # key=key2 00:16:58.018 21:10:13 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:16:58.018 21:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.018 21:10:13 -- common/autotest_common.sh@10 -- # set +x 00:16:58.018 21:10:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.018 21:10:13 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:58.018 21:10:13 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:58.276 00:16:58.276 21:10:14 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:58.276 21:10:14 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:58.276 21:10:14 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.533 21:10:14 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.533 21:10:14 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.534 21:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.534 21:10:14 -- common/autotest_common.sh@10 -- # set +x 00:16:58.534 21:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.534 21:10:14 -- target/auth.sh@44 -- # qpairs='[ 00:16:58.534 { 00:16:58.534 "cntlid": 43, 00:16:58.534 "qid": 0, 00:16:58.534 "state": "enabled", 00:16:58.534 "listen_address": { 00:16:58.534 "trtype": "TCP", 00:16:58.534 "adrfam": "IPv4", 00:16:58.534 "traddr": "10.0.0.2", 00:16:58.534 "trsvcid": "4420" 00:16:58.534 }, 00:16:58.534 "peer_address": { 00:16:58.534 "trtype": "TCP", 00:16:58.534 "adrfam": "IPv4", 00:16:58.534 "traddr": "10.0.0.1", 00:16:58.534 "trsvcid": "36932" 00:16:58.534 }, 00:16:58.534 "auth": { 00:16:58.534 "state": "completed", 00:16:58.534 "digest": "sha384", 00:16:58.534 "dhgroup": "ffdhe6144" 00:16:58.534 } 00:16:58.534 } 00:16:58.534 ]' 00:16:58.534 21:10:14 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:58.534 21:10:14 -- target/auth.sh@45 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:58.534 21:10:14 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:58.534 21:10:14 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.534 21:10:14 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:58.534 21:10:14 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.534 21:10:14 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.534 21:10:14 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.792 21:10:14 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:58.792 21:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.792 21:10:14 -- common/autotest_common.sh@10 -- # set +x 00:16:58.792 21:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.792 21:10:14 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:58.792 21:10:14 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:58.792 21:10:14 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.050 21:10:14 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe6144 3 00:16:59.050 21:10:14 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:59.050 21:10:14 -- target/auth.sh@36 -- # digest=sha384 00:16:59.050 21:10:14 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:59.050 21:10:14 -- target/auth.sh@36 -- # key=key3 00:16:59.050 21:10:14 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:16:59.050 21:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.050 21:10:14 -- common/autotest_common.sh@10 -- # set +x 00:16:59.050 21:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.050 21:10:14 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.050 21:10:14 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.308 00:16:59.308 21:10:15 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:59.308 21:10:15 -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:59.308 21:10:15 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.566 21:10:15 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.566 21:10:15 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.566 21:10:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.566 21:10:15 -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 21:10:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.566 21:10:15 -- target/auth.sh@44 -- # qpairs='[ 00:16:59.566 { 00:16:59.566 "cntlid": 44, 00:16:59.566 "qid": 0, 00:16:59.566 "state": "enabled", 00:16:59.566 "listen_address": { 00:16:59.566 "trtype": "TCP", 00:16:59.566 "adrfam": "IPv4", 00:16:59.566 
"traddr": "10.0.0.2", 00:16:59.566 "trsvcid": "4420" 00:16:59.566 }, 00:16:59.566 "peer_address": { 00:16:59.566 "trtype": "TCP", 00:16:59.566 "adrfam": "IPv4", 00:16:59.566 "traddr": "10.0.0.1", 00:16:59.566 "trsvcid": "36946" 00:16:59.566 }, 00:16:59.566 "auth": { 00:16:59.566 "state": "completed", 00:16:59.566 "digest": "sha384", 00:16:59.566 "dhgroup": "ffdhe6144" 00:16:59.566 } 00:16:59.566 } 00:16:59.566 ]' 00:16:59.566 21:10:15 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:59.566 21:10:15 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.566 21:10:15 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:59.566 21:10:15 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.566 21:10:15 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:59.566 21:10:15 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.566 21:10:15 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.566 21:10:15 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.825 21:10:15 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:16:59.825 21:10:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.825 21:10:15 -- common/autotest_common.sh@10 -- # set +x 00:16:59.825 21:10:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.825 21:10:15 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.825 21:10:15 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:16:59.825 21:10:15 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.825 21:10:15 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.117 21:10:15 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe8192 0 00:17:00.117 21:10:15 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:00.117 21:10:15 -- target/auth.sh@36 -- # digest=sha384 00:17:00.117 21:10:15 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.117 21:10:15 -- target/auth.sh@36 -- # key=key0 00:17:00.117 21:10:15 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:17:00.117 21:10:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.117 21:10:15 -- common/autotest_common.sh@10 -- # set +x 00:17:00.117 21:10:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.117 21:10:15 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:00.118 21:10:15 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:00.683 00:17:00.683 21:10:16 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:00.683 21:10:16 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:00.683 21:10:16 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.683 21:10:16 -- 
target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.683 21:10:16 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.683 21:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.683 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:17:00.683 21:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.683 21:10:16 -- target/auth.sh@44 -- # qpairs='[ 00:17:00.683 { 00:17:00.683 "cntlid": 45, 00:17:00.683 "qid": 0, 00:17:00.683 "state": "enabled", 00:17:00.683 "listen_address": { 00:17:00.683 "trtype": "TCP", 00:17:00.683 "adrfam": "IPv4", 00:17:00.683 "traddr": "10.0.0.2", 00:17:00.683 "trsvcid": "4420" 00:17:00.683 }, 00:17:00.683 "peer_address": { 00:17:00.683 "trtype": "TCP", 00:17:00.683 "adrfam": "IPv4", 00:17:00.683 "traddr": "10.0.0.1", 00:17:00.683 "trsvcid": "53860" 00:17:00.683 }, 00:17:00.683 "auth": { 00:17:00.683 "state": "completed", 00:17:00.683 "digest": "sha384", 00:17:00.683 "dhgroup": "ffdhe8192" 00:17:00.683 } 00:17:00.683 } 00:17:00.683 ]' 00:17:00.683 21:10:16 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:00.683 21:10:16 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.683 21:10:16 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:00.941 21:10:16 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.941 21:10:16 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:00.941 21:10:16 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.941 21:10:16 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.941 21:10:16 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.941 21:10:16 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:00.941 21:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.941 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:17:00.941 21:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.941 21:10:16 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:00.941 21:10:16 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.941 21:10:16 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:01.198 21:10:17 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe8192 1 00:17:01.198 21:10:17 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:01.198 21:10:17 -- target/auth.sh@36 -- # digest=sha384 00:17:01.198 21:10:17 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:01.198 21:10:17 -- target/auth.sh@36 -- # key=key1 00:17:01.198 21:10:17 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:17:01.198 21:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.198 21:10:17 -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 21:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.198 21:10:17 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:01.198 21:10:17 -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:01.763 00:17:01.764 21:10:17 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:01.764 21:10:17 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:01.764 21:10:17 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.764 21:10:17 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.764 21:10:17 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.764 21:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.764 21:10:17 -- common/autotest_common.sh@10 -- # set +x 00:17:02.022 21:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.022 21:10:17 -- target/auth.sh@44 -- # qpairs='[ 00:17:02.022 { 00:17:02.022 "cntlid": 46, 00:17:02.022 "qid": 0, 00:17:02.022 "state": "enabled", 00:17:02.022 "listen_address": { 00:17:02.022 "trtype": "TCP", 00:17:02.022 "adrfam": "IPv4", 00:17:02.022 "traddr": "10.0.0.2", 00:17:02.022 "trsvcid": "4420" 00:17:02.022 }, 00:17:02.022 "peer_address": { 00:17:02.022 "trtype": "TCP", 00:17:02.022 "adrfam": "IPv4", 00:17:02.022 "traddr": "10.0.0.1", 00:17:02.022 "trsvcid": "53870" 00:17:02.022 }, 00:17:02.022 "auth": { 00:17:02.022 "state": "completed", 00:17:02.022 "digest": "sha384", 00:17:02.022 "dhgroup": "ffdhe8192" 00:17:02.022 } 00:17:02.022 } 00:17:02.022 ]' 00:17:02.022 21:10:17 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:02.022 21:10:17 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.022 21:10:17 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:02.022 21:10:17 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.022 21:10:17 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:02.022 21:10:17 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.022 21:10:17 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.022 21:10:17 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.280 21:10:17 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:02.280 21:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.280 21:10:17 -- common/autotest_common.sh@10 -- # set +x 00:17:02.280 21:10:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.280 21:10:18 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:02.280 21:10:18 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.280 21:10:18 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.280 21:10:18 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe8192 2 00:17:02.280 21:10:18 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:02.280 21:10:18 -- target/auth.sh@36 -- # digest=sha384 00:17:02.280 21:10:18 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.280 21:10:18 -- target/auth.sh@36 -- # key=key2 00:17:02.280 21:10:18 -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:17:02.280 21:10:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.280 21:10:18 -- common/autotest_common.sh@10 -- # set +x 00:17:02.280 21:10:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.280 21:10:18 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.280 21:10:18 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.847 00:17:02.847 21:10:18 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:02.847 21:10:18 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:02.847 21:10:18 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.105 21:10:18 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.106 21:10:18 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.106 21:10:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.106 21:10:18 -- common/autotest_common.sh@10 -- # set +x 00:17:03.106 21:10:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.106 21:10:18 -- target/auth.sh@44 -- # qpairs='[ 00:17:03.106 { 00:17:03.106 "cntlid": 47, 00:17:03.106 "qid": 0, 00:17:03.106 "state": "enabled", 00:17:03.106 "listen_address": { 00:17:03.106 "trtype": "TCP", 00:17:03.106 "adrfam": "IPv4", 00:17:03.106 "traddr": "10.0.0.2", 00:17:03.106 "trsvcid": "4420" 00:17:03.106 }, 00:17:03.106 "peer_address": { 00:17:03.106 "trtype": "TCP", 00:17:03.106 "adrfam": "IPv4", 00:17:03.106 "traddr": "10.0.0.1", 00:17:03.106 "trsvcid": "53872" 00:17:03.106 }, 00:17:03.106 "auth": { 00:17:03.106 "state": "completed", 00:17:03.106 "digest": "sha384", 00:17:03.106 "dhgroup": "ffdhe8192" 00:17:03.106 } 00:17:03.106 } 00:17:03.106 ]' 00:17:03.106 21:10:18 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:03.106 21:10:18 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.106 21:10:18 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:03.106 21:10:18 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.106 21:10:18 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:03.106 21:10:18 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.106 21:10:18 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.106 21:10:18 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.364 21:10:19 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:03.364 21:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.364 21:10:19 -- common/autotest_common.sh@10 -- # set +x 00:17:03.364 21:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.364 21:10:19 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:03.364 21:10:19 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.364 21:10:19 -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.622 21:10:19 -- target/auth.sh@85 -- # connect_authenticate sha384 ffdhe8192 3 00:17:03.622 21:10:19 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:03.622 21:10:19 -- target/auth.sh@36 -- # digest=sha384 00:17:03.622 21:10:19 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.622 21:10:19 -- target/auth.sh@36 -- # key=key3 00:17:03.622 21:10:19 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:17:03.622 21:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.622 21:10:19 -- common/autotest_common.sh@10 -- # set +x 00:17:03.622 21:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.622 21:10:19 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.622 21:10:19 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.189 00:17:04.189 21:10:19 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:04.189 21:10:19 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:04.189 21:10:19 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.189 21:10:20 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.189 21:10:20 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.189 21:10:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.189 21:10:20 -- common/autotest_common.sh@10 -- # set +x 00:17:04.189 21:10:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.189 21:10:20 -- target/auth.sh@44 -- # qpairs='[ 00:17:04.189 { 00:17:04.189 "cntlid": 48, 00:17:04.189 "qid": 0, 00:17:04.189 "state": "enabled", 00:17:04.189 "listen_address": { 00:17:04.189 "trtype": "TCP", 00:17:04.189 "adrfam": "IPv4", 00:17:04.189 "traddr": "10.0.0.2", 00:17:04.189 "trsvcid": "4420" 00:17:04.189 }, 00:17:04.189 "peer_address": { 00:17:04.189 "trtype": "TCP", 00:17:04.189 "adrfam": "IPv4", 00:17:04.189 "traddr": "10.0.0.1", 00:17:04.189 "trsvcid": "53878" 00:17:04.189 }, 00:17:04.189 "auth": { 00:17:04.189 "state": "completed", 00:17:04.189 "digest": "sha384", 00:17:04.189 "dhgroup": "ffdhe8192" 00:17:04.189 } 00:17:04.189 } 00:17:04.189 ]' 00:17:04.189 21:10:20 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:04.189 21:10:20 -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.189 21:10:20 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:04.189 21:10:20 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.189 21:10:20 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:04.447 21:10:20 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.447 21:10:20 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.447 21:10:20 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.447 21:10:20 -- target/auth.sh@50 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:04.447 21:10:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.447 21:10:20 -- common/autotest_common.sh@10 -- # set +x 00:17:04.447 21:10:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.447 21:10:20 -- target/auth.sh@80 -- # for digest in "${digests[@]}" 00:17:04.447 21:10:20 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.447 21:10:20 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:04.447 21:10:20 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:04.447 21:10:20 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:04.706 21:10:20 -- target/auth.sh@85 -- # connect_authenticate sha512 null 0 00:17:04.706 21:10:20 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:04.706 21:10:20 -- target/auth.sh@36 -- # digest=sha512 00:17:04.706 21:10:20 -- target/auth.sh@36 -- # dhgroup=null 00:17:04.706 21:10:20 -- target/auth.sh@36 -- # key=key0 00:17:04.706 21:10:20 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:17:04.706 21:10:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.706 21:10:20 -- common/autotest_common.sh@10 -- # set +x 00:17:04.706 21:10:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.706 21:10:20 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:04.706 21:10:20 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:04.964 00:17:04.964 21:10:20 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:04.964 21:10:20 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:04.964 21:10:20 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.222 21:10:20 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.222 21:10:20 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.222 21:10:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.222 21:10:20 -- common/autotest_common.sh@10 -- # set +x 00:17:05.222 21:10:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.222 21:10:20 -- target/auth.sh@44 -- # qpairs='[ 00:17:05.222 { 00:17:05.222 "cntlid": 49, 00:17:05.222 "qid": 0, 00:17:05.222 "state": "enabled", 00:17:05.222 "listen_address": { 00:17:05.222 "trtype": "TCP", 00:17:05.222 "adrfam": "IPv4", 00:17:05.222 "traddr": "10.0.0.2", 00:17:05.222 "trsvcid": "4420" 00:17:05.222 }, 00:17:05.222 "peer_address": { 00:17:05.222 "trtype": "TCP", 00:17:05.222 "adrfam": "IPv4", 00:17:05.222 "traddr": "10.0.0.1", 00:17:05.222 "trsvcid": "53886" 00:17:05.222 }, 00:17:05.222 "auth": { 00:17:05.222 "state": "completed", 00:17:05.222 "digest": "sha512", 00:17:05.222 "dhgroup": "null" 00:17:05.222 } 00:17:05.222 } 00:17:05.222 ]' 00:17:05.222 21:10:20 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:05.222 21:10:20 -- 
target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.222 21:10:20 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:05.222 21:10:21 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:05.222 21:10:21 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:05.222 21:10:21 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.222 21:10:21 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.222 21:10:21 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.480 21:10:21 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:05.481 21:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.481 21:10:21 -- common/autotest_common.sh@10 -- # set +x 00:17:05.481 21:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.481 21:10:21 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:05.481 21:10:21 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.481 21:10:21 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.738 21:10:21 -- target/auth.sh@85 -- # connect_authenticate sha512 null 1 00:17:05.739 21:10:21 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:05.739 21:10:21 -- target/auth.sh@36 -- # digest=sha512 00:17:05.739 21:10:21 -- target/auth.sh@36 -- # dhgroup=null 00:17:05.739 21:10:21 -- target/auth.sh@36 -- # key=key1 00:17:05.739 21:10:21 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:17:05.739 21:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.739 21:10:21 -- common/autotest_common.sh@10 -- # set +x 00:17:05.739 21:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.739 21:10:21 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:05.739 21:10:21 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:05.739 00:17:05.995 21:10:21 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:05.995 21:10:21 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:05.995 21:10:21 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.995 21:10:21 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.995 21:10:21 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.995 21:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.995 21:10:21 -- common/autotest_common.sh@10 -- # set +x 00:17:05.995 21:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.995 21:10:21 -- target/auth.sh@44 -- # qpairs='[ 00:17:05.995 { 00:17:05.995 "cntlid": 50, 00:17:05.995 "qid": 0, 00:17:05.995 "state": "enabled", 00:17:05.995 "listen_address": { 00:17:05.995 "trtype": "TCP", 00:17:05.995 "adrfam": "IPv4", 00:17:05.995 
"traddr": "10.0.0.2", 00:17:05.995 "trsvcid": "4420" 00:17:05.995 }, 00:17:05.995 "peer_address": { 00:17:05.995 "trtype": "TCP", 00:17:05.995 "adrfam": "IPv4", 00:17:05.995 "traddr": "10.0.0.1", 00:17:05.995 "trsvcid": "53900" 00:17:05.995 }, 00:17:05.995 "auth": { 00:17:05.995 "state": "completed", 00:17:05.995 "digest": "sha512", 00:17:05.995 "dhgroup": "null" 00:17:05.995 } 00:17:05.995 } 00:17:05.995 ]' 00:17:05.995 21:10:21 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:05.995 21:10:21 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.995 21:10:21 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:06.252 21:10:21 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:06.252 21:10:21 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:06.252 21:10:21 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.252 21:10:21 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.252 21:10:21 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.252 21:10:22 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:06.252 21:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.252 21:10:22 -- common/autotest_common.sh@10 -- # set +x 00:17:06.252 21:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.252 21:10:22 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:06.252 21:10:22 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.510 21:10:22 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.510 21:10:22 -- target/auth.sh@85 -- # connect_authenticate sha512 null 2 00:17:06.510 21:10:22 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:06.510 21:10:22 -- target/auth.sh@36 -- # digest=sha512 00:17:06.510 21:10:22 -- target/auth.sh@36 -- # dhgroup=null 00:17:06.510 21:10:22 -- target/auth.sh@36 -- # key=key2 00:17:06.510 21:10:22 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:17:06.510 21:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.510 21:10:22 -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 21:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.510 21:10:22 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:06.510 21:10:22 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:06.768 00:17:06.768 21:10:22 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:06.769 21:10:22 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:06.769 21:10:22 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.027 21:10:22 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.027 21:10:22 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:07.027 21:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.027 21:10:22 -- common/autotest_common.sh@10 -- # set +x 00:17:07.027 21:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.027 21:10:22 -- target/auth.sh@44 -- # qpairs='[ 00:17:07.027 { 00:17:07.027 "cntlid": 51, 00:17:07.027 "qid": 0, 00:17:07.027 "state": "enabled", 00:17:07.027 "listen_address": { 00:17:07.027 "trtype": "TCP", 00:17:07.027 "adrfam": "IPv4", 00:17:07.027 "traddr": "10.0.0.2", 00:17:07.027 "trsvcid": "4420" 00:17:07.027 }, 00:17:07.027 "peer_address": { 00:17:07.027 "trtype": "TCP", 00:17:07.027 "adrfam": "IPv4", 00:17:07.027 "traddr": "10.0.0.1", 00:17:07.027 "trsvcid": "53908" 00:17:07.027 }, 00:17:07.027 "auth": { 00:17:07.027 "state": "completed", 00:17:07.027 "digest": "sha512", 00:17:07.027 "dhgroup": "null" 00:17:07.027 } 00:17:07.027 } 00:17:07.027 ]' 00:17:07.027 21:10:22 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:07.027 21:10:22 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.027 21:10:22 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:07.027 21:10:22 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:07.027 21:10:22 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:07.027 21:10:22 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.027 21:10:22 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.027 21:10:22 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.285 21:10:23 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:07.285 21:10:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.285 21:10:23 -- common/autotest_common.sh@10 -- # set +x 00:17:07.285 21:10:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.285 21:10:23 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:07.285 21:10:23 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.285 21:10:23 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.543 21:10:23 -- target/auth.sh@85 -- # connect_authenticate sha512 null 3 00:17:07.543 21:10:23 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:07.543 21:10:23 -- target/auth.sh@36 -- # digest=sha512 00:17:07.543 21:10:23 -- target/auth.sh@36 -- # dhgroup=null 00:17:07.543 21:10:23 -- target/auth.sh@36 -- # key=key3 00:17:07.543 21:10:23 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:17:07.543 21:10:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.543 21:10:23 -- common/autotest_common.sh@10 -- # set +x 00:17:07.543 21:10:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.543 21:10:23 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.543 21:10:23 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.801 00:17:07.801 21:10:23 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:07.801 21:10:23 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:07.801 21:10:23 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.801 21:10:23 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.801 21:10:23 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.801 21:10:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.801 21:10:23 -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 21:10:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.801 21:10:23 -- target/auth.sh@44 -- # qpairs='[ 00:17:07.801 { 00:17:07.801 "cntlid": 52, 00:17:07.801 "qid": 0, 00:17:07.801 "state": "enabled", 00:17:07.801 "listen_address": { 00:17:07.801 "trtype": "TCP", 00:17:07.801 "adrfam": "IPv4", 00:17:07.801 "traddr": "10.0.0.2", 00:17:07.801 "trsvcid": "4420" 00:17:07.801 }, 00:17:07.801 "peer_address": { 00:17:07.801 "trtype": "TCP", 00:17:07.801 "adrfam": "IPv4", 00:17:07.801 "traddr": "10.0.0.1", 00:17:07.801 "trsvcid": "53910" 00:17:07.801 }, 00:17:07.801 "auth": { 00:17:07.801 "state": "completed", 00:17:07.801 "digest": "sha512", 00:17:07.801 "dhgroup": "null" 00:17:07.801 } 00:17:07.801 } 00:17:07.801 ]' 00:17:07.801 21:10:23 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:08.059 21:10:23 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.059 21:10:23 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:08.059 21:10:23 -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:08.059 21:10:23 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:08.059 21:10:23 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.059 21:10:23 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.059 21:10:23 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.317 21:10:24 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:08.317 21:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.317 21:10:24 -- common/autotest_common.sh@10 -- # set +x 00:17:08.317 21:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.317 21:10:24 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.317 21:10:24 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:08.317 21:10:24 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.317 21:10:24 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.317 21:10:24 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe2048 0 00:17:08.317 21:10:24 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:08.317 21:10:24 -- target/auth.sh@36 -- # digest=sha512 00:17:08.317 21:10:24 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:08.317 21:10:24 -- target/auth.sh@36 -- # key=key0 00:17:08.317 21:10:24 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:17:08.317 21:10:24 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:17:08.317 21:10:24 -- common/autotest_common.sh@10 -- # set +x 00:17:08.317 21:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.317 21:10:24 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:08.317 21:10:24 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:08.575 00:17:08.575 21:10:24 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:08.575 21:10:24 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:08.575 21:10:24 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.832 21:10:24 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.832 21:10:24 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.832 21:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.832 21:10:24 -- common/autotest_common.sh@10 -- # set +x 00:17:08.832 21:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.832 21:10:24 -- target/auth.sh@44 -- # qpairs='[ 00:17:08.832 { 00:17:08.832 "cntlid": 53, 00:17:08.832 "qid": 0, 00:17:08.832 "state": "enabled", 00:17:08.832 "listen_address": { 00:17:08.832 "trtype": "TCP", 00:17:08.832 "adrfam": "IPv4", 00:17:08.832 "traddr": "10.0.0.2", 00:17:08.832 "trsvcid": "4420" 00:17:08.832 }, 00:17:08.832 "peer_address": { 00:17:08.832 "trtype": "TCP", 00:17:08.832 "adrfam": "IPv4", 00:17:08.832 "traddr": "10.0.0.1", 00:17:08.832 "trsvcid": "53920" 00:17:08.832 }, 00:17:08.832 "auth": { 00:17:08.832 "state": "completed", 00:17:08.832 "digest": "sha512", 00:17:08.832 "dhgroup": "ffdhe2048" 00:17:08.832 } 00:17:08.832 } 00:17:08.832 ]' 00:17:08.832 21:10:24 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:08.832 21:10:24 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.832 21:10:24 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:08.832 21:10:24 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.832 21:10:24 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:08.832 21:10:24 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.832 21:10:24 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.832 21:10:24 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.090 21:10:24 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:09.090 21:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.090 21:10:24 -- common/autotest_common.sh@10 -- # set +x 00:17:09.090 21:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.090 21:10:24 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:09.090 21:10:24 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.090 21:10:24 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.348 
21:10:25 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe2048 1 00:17:09.348 21:10:25 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:09.348 21:10:25 -- target/auth.sh@36 -- # digest=sha512 00:17:09.348 21:10:25 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.348 21:10:25 -- target/auth.sh@36 -- # key=key1 00:17:09.348 21:10:25 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:17:09.348 21:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.348 21:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:09.348 21:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.348 21:10:25 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:09.348 21:10:25 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:09.605 00:17:09.605 21:10:25 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:09.605 21:10:25 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:09.605 21:10:25 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.605 21:10:25 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.863 21:10:25 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.863 21:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.863 21:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:09.863 21:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.863 21:10:25 -- target/auth.sh@44 -- # qpairs='[ 00:17:09.863 { 00:17:09.863 "cntlid": 54, 00:17:09.863 "qid": 0, 00:17:09.863 "state": "enabled", 00:17:09.863 "listen_address": { 00:17:09.863 "trtype": "TCP", 00:17:09.863 "adrfam": "IPv4", 00:17:09.863 "traddr": "10.0.0.2", 00:17:09.863 "trsvcid": "4420" 00:17:09.863 }, 00:17:09.863 "peer_address": { 00:17:09.863 "trtype": "TCP", 00:17:09.863 "adrfam": "IPv4", 00:17:09.863 "traddr": "10.0.0.1", 00:17:09.863 "trsvcid": "53934" 00:17:09.863 }, 00:17:09.863 "auth": { 00:17:09.863 "state": "completed", 00:17:09.863 "digest": "sha512", 00:17:09.863 "dhgroup": "ffdhe2048" 00:17:09.863 } 00:17:09.863 } 00:17:09.863 ]' 00:17:09.863 21:10:25 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:09.863 21:10:25 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.863 21:10:25 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:09.863 21:10:25 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.863 21:10:25 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:09.863 21:10:25 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.863 21:10:25 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.863 21:10:25 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.120 21:10:25 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:10.120 21:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.120 21:10:25 -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.120 21:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.120 21:10:25 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:10.120 21:10:25 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.120 21:10:25 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.377 21:10:26 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe2048 2 00:17:10.377 21:10:26 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:10.377 21:10:26 -- target/auth.sh@36 -- # digest=sha512 00:17:10.377 21:10:26 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:10.377 21:10:26 -- target/auth.sh@36 -- # key=key2 00:17:10.377 21:10:26 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:17:10.377 21:10:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.377 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:17:10.377 21:10:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.377 21:10:26 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:10.377 21:10:26 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:10.377 00:17:10.634 21:10:26 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:10.634 21:10:26 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.634 21:10:26 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:10.634 21:10:26 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.634 21:10:26 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.634 21:10:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.634 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:17:10.634 21:10:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.634 21:10:26 -- target/auth.sh@44 -- # qpairs='[ 00:17:10.634 { 00:17:10.634 "cntlid": 55, 00:17:10.634 "qid": 0, 00:17:10.634 "state": "enabled", 00:17:10.634 "listen_address": { 00:17:10.634 "trtype": "TCP", 00:17:10.634 "adrfam": "IPv4", 00:17:10.634 "traddr": "10.0.0.2", 00:17:10.634 "trsvcid": "4420" 00:17:10.634 }, 00:17:10.634 "peer_address": { 00:17:10.634 "trtype": "TCP", 00:17:10.634 "adrfam": "IPv4", 00:17:10.634 "traddr": "10.0.0.1", 00:17:10.634 "trsvcid": "35872" 00:17:10.634 }, 00:17:10.634 "auth": { 00:17:10.634 "state": "completed", 00:17:10.634 "digest": "sha512", 00:17:10.634 "dhgroup": "ffdhe2048" 00:17:10.634 } 00:17:10.634 } 00:17:10.634 ]' 00:17:10.634 21:10:26 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:10.634 21:10:26 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.634 21:10:26 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:10.892 21:10:26 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.892 21:10:26 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:10.892 21:10:26 -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.892 21:10:26 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.892 21:10:26 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.892 21:10:26 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:10.892 21:10:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.892 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:17:10.892 21:10:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.892 21:10:26 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:10.892 21:10:26 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.892 21:10:26 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.150 21:10:26 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe2048 3 00:17:11.150 21:10:26 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:11.150 21:10:26 -- target/auth.sh@36 -- # digest=sha512 00:17:11.150 21:10:26 -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.150 21:10:26 -- target/auth.sh@36 -- # key=key3 00:17:11.150 21:10:26 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:17:11.150 21:10:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.150 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:17:11.150 21:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.150 21:10:27 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.150 21:10:27 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.409 00:17:11.409 21:10:27 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:11.409 21:10:27 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:11.409 21:10:27 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.667 21:10:27 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.667 21:10:27 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.667 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.667 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:11.667 21:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.667 21:10:27 -- target/auth.sh@44 -- # qpairs='[ 00:17:11.667 { 00:17:11.667 "cntlid": 56, 00:17:11.667 "qid": 0, 00:17:11.667 "state": "enabled", 00:17:11.667 "listen_address": { 00:17:11.667 "trtype": "TCP", 00:17:11.667 "adrfam": "IPv4", 00:17:11.667 "traddr": "10.0.0.2", 00:17:11.667 "trsvcid": "4420" 00:17:11.667 }, 00:17:11.667 "peer_address": { 00:17:11.667 "trtype": "TCP", 00:17:11.667 "adrfam": "IPv4", 00:17:11.667 "traddr": "10.0.0.1", 00:17:11.667 "trsvcid": "35886" 00:17:11.667 }, 00:17:11.667 "auth": { 00:17:11.667 
"state": "completed", 00:17:11.667 "digest": "sha512", 00:17:11.667 "dhgroup": "ffdhe2048" 00:17:11.667 } 00:17:11.667 } 00:17:11.667 ]' 00:17:11.667 21:10:27 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:11.667 21:10:27 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.667 21:10:27 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:11.667 21:10:27 -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.667 21:10:27 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:11.667 21:10:27 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.667 21:10:27 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.667 21:10:27 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.925 21:10:27 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:11.925 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.925 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:11.925 21:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.925 21:10:27 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.925 21:10:27 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:11.925 21:10:27 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.925 21:10:27 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.182 21:10:27 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe3072 0 00:17:12.182 21:10:27 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:12.182 21:10:27 -- target/auth.sh@36 -- # digest=sha512 00:17:12.182 21:10:27 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.182 21:10:27 -- target/auth.sh@36 -- # key=key0 00:17:12.182 21:10:27 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:17:12.182 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.182 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:12.182 21:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.182 21:10:27 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:12.182 21:10:27 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:12.439 00:17:12.439 21:10:28 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:12.439 21:10:28 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:12.439 21:10:28 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.439 21:10:28 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.439 21:10:28 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.439 21:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.439 21:10:28 -- common/autotest_common.sh@10 -- # 
set +x 00:17:12.439 21:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.439 21:10:28 -- target/auth.sh@44 -- # qpairs='[ 00:17:12.439 { 00:17:12.439 "cntlid": 57, 00:17:12.439 "qid": 0, 00:17:12.439 "state": "enabled", 00:17:12.439 "listen_address": { 00:17:12.439 "trtype": "TCP", 00:17:12.439 "adrfam": "IPv4", 00:17:12.439 "traddr": "10.0.0.2", 00:17:12.439 "trsvcid": "4420" 00:17:12.439 }, 00:17:12.439 "peer_address": { 00:17:12.439 "trtype": "TCP", 00:17:12.439 "adrfam": "IPv4", 00:17:12.439 "traddr": "10.0.0.1", 00:17:12.439 "trsvcid": "35892" 00:17:12.439 }, 00:17:12.439 "auth": { 00:17:12.439 "state": "completed", 00:17:12.439 "digest": "sha512", 00:17:12.439 "dhgroup": "ffdhe3072" 00:17:12.439 } 00:17:12.439 } 00:17:12.439 ]' 00:17:12.439 21:10:28 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:12.697 21:10:28 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.697 21:10:28 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:12.697 21:10:28 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.697 21:10:28 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:12.697 21:10:28 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.697 21:10:28 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.697 21:10:28 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.954 21:10:28 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:12.954 21:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.954 21:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:12.954 21:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.954 21:10:28 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:12.954 21:10:28 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.954 21:10:28 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.954 21:10:28 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe3072 1 00:17:12.954 21:10:28 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:12.954 21:10:28 -- target/auth.sh@36 -- # digest=sha512 00:17:12.954 21:10:28 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.954 21:10:28 -- target/auth.sh@36 -- # key=key1 00:17:12.954 21:10:28 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:17:12.954 21:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.954 21:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:12.954 21:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.954 21:10:28 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:12.954 21:10:28 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:13.212 00:17:13.212 21:10:29 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 
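The cycle logged above and below repeats once per digest/dhgroup/key combination. As a rough guide to what these xtrace lines correspond to, the host/target RPC sequence for the sha512 + ffdhe3072 + key1 iteration looks approximately like the sketch below. It is reconstructed from the scripts/rpc.py invocations visible in this log, not taken verbatim from the test script; the $rpc shorthand is introduced here for brevity, and key1 is assumed to name a DH-HMAC-CHAP key registered earlier in the run.

# Sketch of one connect_authenticate iteration (sha512 / ffdhe3072 / key1),
# reconstructed from the logged scripts/rpc.py calls.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: restrict the initiator to a single digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Target side: allow host0 onto cnode0 with the key under test
# (key1 is assumed to have been registered earlier in the run).
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1

# Host side: attach the controller and let DH-HMAC-CHAP authentication run.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1

# Checks mirrored from the log: controller name, then auth state/digest/dhgroup on the target qpair.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

# Tear down before the next digest/dhgroup/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0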
00:17:13.212 21:10:29 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:13.212 21:10:29 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.470 21:10:29 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.470 21:10:29 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.470 21:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.470 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:17:13.470 21:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.470 21:10:29 -- target/auth.sh@44 -- # qpairs='[ 00:17:13.470 { 00:17:13.470 "cntlid": 58, 00:17:13.470 "qid": 0, 00:17:13.470 "state": "enabled", 00:17:13.470 "listen_address": { 00:17:13.470 "trtype": "TCP", 00:17:13.470 "adrfam": "IPv4", 00:17:13.470 "traddr": "10.0.0.2", 00:17:13.470 "trsvcid": "4420" 00:17:13.470 }, 00:17:13.470 "peer_address": { 00:17:13.470 "trtype": "TCP", 00:17:13.470 "adrfam": "IPv4", 00:17:13.470 "traddr": "10.0.0.1", 00:17:13.470 "trsvcid": "35902" 00:17:13.470 }, 00:17:13.470 "auth": { 00:17:13.470 "state": "completed", 00:17:13.470 "digest": "sha512", 00:17:13.470 "dhgroup": "ffdhe3072" 00:17:13.470 } 00:17:13.470 } 00:17:13.470 ]' 00:17:13.470 21:10:29 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:13.470 21:10:29 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.470 21:10:29 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:13.470 21:10:29 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.470 21:10:29 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:13.757 21:10:29 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.757 21:10:29 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.757 21:10:29 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.757 21:10:29 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:13.757 21:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.757 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:17:13.757 21:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.757 21:10:29 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:13.757 21:10:29 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.757 21:10:29 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.020 21:10:29 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe3072 2 00:17:14.020 21:10:29 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:14.020 21:10:29 -- target/auth.sh@36 -- # digest=sha512 00:17:14.020 21:10:29 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.020 21:10:29 -- target/auth.sh@36 -- # key=key2 00:17:14.020 21:10:29 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:17:14.020 21:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.020 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:17:14.020 21:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.020 21:10:29 -- target/auth.sh@39 
-- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:14.020 21:10:29 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:14.278 00:17:14.278 21:10:30 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:14.278 21:10:30 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:14.278 21:10:30 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.536 21:10:30 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.536 21:10:30 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.536 21:10:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.536 21:10:30 -- common/autotest_common.sh@10 -- # set +x 00:17:14.536 21:10:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.536 21:10:30 -- target/auth.sh@44 -- # qpairs='[ 00:17:14.536 { 00:17:14.536 "cntlid": 59, 00:17:14.536 "qid": 0, 00:17:14.536 "state": "enabled", 00:17:14.536 "listen_address": { 00:17:14.536 "trtype": "TCP", 00:17:14.536 "adrfam": "IPv4", 00:17:14.536 "traddr": "10.0.0.2", 00:17:14.536 "trsvcid": "4420" 00:17:14.536 }, 00:17:14.536 "peer_address": { 00:17:14.536 "trtype": "TCP", 00:17:14.536 "adrfam": "IPv4", 00:17:14.536 "traddr": "10.0.0.1", 00:17:14.536 "trsvcid": "35916" 00:17:14.536 }, 00:17:14.536 "auth": { 00:17:14.536 "state": "completed", 00:17:14.536 "digest": "sha512", 00:17:14.536 "dhgroup": "ffdhe3072" 00:17:14.536 } 00:17:14.536 } 00:17:14.536 ]' 00:17:14.536 21:10:30 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:14.536 21:10:30 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.536 21:10:30 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:14.536 21:10:30 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.536 21:10:30 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:14.536 21:10:30 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.536 21:10:30 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.536 21:10:30 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.793 21:10:30 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:14.793 21:10:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.793 21:10:30 -- common/autotest_common.sh@10 -- # set +x 00:17:14.793 21:10:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.793 21:10:30 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:14.793 21:10:30 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.793 21:10:30 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.051 21:10:30 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe3072 3 00:17:15.051 21:10:30 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:15.051 21:10:30 -- target/auth.sh@36 -- # 
digest=sha512 00:17:15.051 21:10:30 -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.051 21:10:30 -- target/auth.sh@36 -- # key=key3 00:17:15.051 21:10:30 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:17:15.051 21:10:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.051 21:10:30 -- common/autotest_common.sh@10 -- # set +x 00:17:15.051 21:10:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.051 21:10:30 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.051 21:10:30 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.051 00:17:15.051 21:10:30 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:15.051 21:10:30 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:15.051 21:10:30 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.309 21:10:31 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.309 21:10:31 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.309 21:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.309 21:10:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.309 21:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.309 21:10:31 -- target/auth.sh@44 -- # qpairs='[ 00:17:15.309 { 00:17:15.309 "cntlid": 60, 00:17:15.309 "qid": 0, 00:17:15.309 "state": "enabled", 00:17:15.309 "listen_address": { 00:17:15.309 "trtype": "TCP", 00:17:15.309 "adrfam": "IPv4", 00:17:15.309 "traddr": "10.0.0.2", 00:17:15.309 "trsvcid": "4420" 00:17:15.309 }, 00:17:15.309 "peer_address": { 00:17:15.309 "trtype": "TCP", 00:17:15.309 "adrfam": "IPv4", 00:17:15.309 "traddr": "10.0.0.1", 00:17:15.309 "trsvcid": "35922" 00:17:15.309 }, 00:17:15.309 "auth": { 00:17:15.309 "state": "completed", 00:17:15.309 "digest": "sha512", 00:17:15.309 "dhgroup": "ffdhe3072" 00:17:15.309 } 00:17:15.309 } 00:17:15.309 ]' 00:17:15.309 21:10:31 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:15.309 21:10:31 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.309 21:10:31 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:15.567 21:10:31 -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.567 21:10:31 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:15.567 21:10:31 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.567 21:10:31 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.567 21:10:31 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.567 21:10:31 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:15.567 21:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.567 21:10:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.567 21:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.567 21:10:31 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 
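Every round is judged the same way: the target's qpair listing for cnode0 must report the negotiated digest and DH group and an auth state of "completed". A sketch of that check, assuming jq as used above and $dhgroup set to the group under test (ffdhe3072 just finished, ffdhe4096 next):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # all three fields must match what bdev_nvme_set_options negotiated
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]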
00:17:15.567 21:10:31 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:15.567 21:10:31 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:15.567 21:10:31 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:15.824 21:10:31 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe4096 0 00:17:15.824 21:10:31 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:15.824 21:10:31 -- target/auth.sh@36 -- # digest=sha512 00:17:15.824 21:10:31 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:15.824 21:10:31 -- target/auth.sh@36 -- # key=key0 00:17:15.824 21:10:31 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:17:15.824 21:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.824 21:10:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.824 21:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.825 21:10:31 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:15.825 21:10:31 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:16.082 00:17:16.082 21:10:31 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:16.082 21:10:31 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:16.082 21:10:31 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.340 21:10:32 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.340 21:10:32 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.340 21:10:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.340 21:10:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.340 21:10:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.340 21:10:32 -- target/auth.sh@44 -- # qpairs='[ 00:17:16.340 { 00:17:16.340 "cntlid": 61, 00:17:16.340 "qid": 0, 00:17:16.340 "state": "enabled", 00:17:16.340 "listen_address": { 00:17:16.340 "trtype": "TCP", 00:17:16.340 "adrfam": "IPv4", 00:17:16.340 "traddr": "10.0.0.2", 00:17:16.340 "trsvcid": "4420" 00:17:16.340 }, 00:17:16.340 "peer_address": { 00:17:16.340 "trtype": "TCP", 00:17:16.340 "adrfam": "IPv4", 00:17:16.340 "traddr": "10.0.0.1", 00:17:16.340 "trsvcid": "35928" 00:17:16.340 }, 00:17:16.340 "auth": { 00:17:16.340 "state": "completed", 00:17:16.340 "digest": "sha512", 00:17:16.340 "dhgroup": "ffdhe4096" 00:17:16.340 } 00:17:16.340 } 00:17:16.340 ]' 00:17:16.340 21:10:32 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:16.340 21:10:32 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.340 21:10:32 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:16.340 21:10:32 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.340 21:10:32 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:16.340 21:10:32 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.340 21:10:32 -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:16.340 21:10:32 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.598 21:10:32 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:16.598 21:10:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.598 21:10:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 21:10:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.598 21:10:32 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:16.598 21:10:32 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.598 21:10:32 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.856 21:10:32 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe4096 1 00:17:16.856 21:10:32 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:16.856 21:10:32 -- target/auth.sh@36 -- # digest=sha512 00:17:16.856 21:10:32 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:16.856 21:10:32 -- target/auth.sh@36 -- # key=key1 00:17:16.856 21:10:32 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:17:16.856 21:10:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.856 21:10:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.856 21:10:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.856 21:10:32 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:16.856 21:10:32 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:17.113 00:17:17.114 21:10:32 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:17.114 21:10:32 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:17.114 21:10:32 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.114 21:10:33 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.114 21:10:33 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.114 21:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.114 21:10:33 -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 21:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.114 21:10:33 -- target/auth.sh@44 -- # qpairs='[ 00:17:17.114 { 00:17:17.114 "cntlid": 62, 00:17:17.114 "qid": 0, 00:17:17.114 "state": "enabled", 00:17:17.114 "listen_address": { 00:17:17.114 "trtype": "TCP", 00:17:17.114 "adrfam": "IPv4", 00:17:17.114 "traddr": "10.0.0.2", 00:17:17.114 "trsvcid": "4420" 00:17:17.114 }, 00:17:17.114 "peer_address": { 00:17:17.114 "trtype": "TCP", 00:17:17.114 "adrfam": "IPv4", 00:17:17.114 "traddr": "10.0.0.1", 00:17:17.114 "trsvcid": "35938" 00:17:17.114 }, 00:17:17.114 "auth": { 00:17:17.114 "state": "completed", 00:17:17.114 "digest": "sha512", 00:17:17.114 "dhgroup": "ffdhe4096" 00:17:17.114 } 00:17:17.114 
} 00:17:17.114 ]' 00:17:17.114 21:10:33 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:17.371 21:10:33 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.371 21:10:33 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:17.371 21:10:33 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.371 21:10:33 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:17.371 21:10:33 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.371 21:10:33 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.371 21:10:33 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.631 21:10:33 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:17.631 21:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.631 21:10:33 -- common/autotest_common.sh@10 -- # set +x 00:17:17.631 21:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.631 21:10:33 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:17.631 21:10:33 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.631 21:10:33 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.631 21:10:33 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe4096 2 00:17:17.631 21:10:33 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:17.631 21:10:33 -- target/auth.sh@36 -- # digest=sha512 00:17:17.631 21:10:33 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:17.631 21:10:33 -- target/auth.sh@36 -- # key=key2 00:17:17.631 21:10:33 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:17:17.631 21:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.631 21:10:33 -- common/autotest_common.sh@10 -- # set +x 00:17:17.631 21:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.631 21:10:33 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:17.631 21:10:33 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:17.889 00:17:17.889 21:10:33 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:17.889 21:10:33 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:17.889 21:10:33 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.146 21:10:33 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.146 21:10:33 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.146 21:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.146 21:10:33 -- common/autotest_common.sh@10 -- # set +x 00:17:18.146 21:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.146 21:10:33 -- target/auth.sh@44 -- # qpairs='[ 00:17:18.146 { 00:17:18.146 "cntlid": 63, 00:17:18.147 "qid": 
0, 00:17:18.147 "state": "enabled", 00:17:18.147 "listen_address": { 00:17:18.147 "trtype": "TCP", 00:17:18.147 "adrfam": "IPv4", 00:17:18.147 "traddr": "10.0.0.2", 00:17:18.147 "trsvcid": "4420" 00:17:18.147 }, 00:17:18.147 "peer_address": { 00:17:18.147 "trtype": "TCP", 00:17:18.147 "adrfam": "IPv4", 00:17:18.147 "traddr": "10.0.0.1", 00:17:18.147 "trsvcid": "35944" 00:17:18.147 }, 00:17:18.147 "auth": { 00:17:18.147 "state": "completed", 00:17:18.147 "digest": "sha512", 00:17:18.147 "dhgroup": "ffdhe4096" 00:17:18.147 } 00:17:18.147 } 00:17:18.147 ]' 00:17:18.147 21:10:34 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:18.147 21:10:34 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.147 21:10:34 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:18.405 21:10:34 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.405 21:10:34 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:18.405 21:10:34 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.405 21:10:34 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.405 21:10:34 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.405 21:10:34 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:18.405 21:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.405 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:17:18.405 21:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.405 21:10:34 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:18.405 21:10:34 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.405 21:10:34 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.663 21:10:34 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe4096 3 00:17:18.663 21:10:34 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:18.663 21:10:34 -- target/auth.sh@36 -- # digest=sha512 00:17:18.663 21:10:34 -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:18.663 21:10:34 -- target/auth.sh@36 -- # key=key3 00:17:18.663 21:10:34 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:17:18.663 21:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.663 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:17:18.663 21:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.663 21:10:34 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.663 21:10:34 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.921 00:17:18.921 21:10:34 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:18.921 21:10:34 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:18.921 21:10:34 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:19.179 21:10:34 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.179 21:10:34 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.179 21:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.179 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.179 21:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.179 21:10:34 -- target/auth.sh@44 -- # qpairs='[ 00:17:19.179 { 00:17:19.179 "cntlid": 64, 00:17:19.179 "qid": 0, 00:17:19.179 "state": "enabled", 00:17:19.179 "listen_address": { 00:17:19.179 "trtype": "TCP", 00:17:19.179 "adrfam": "IPv4", 00:17:19.179 "traddr": "10.0.0.2", 00:17:19.179 "trsvcid": "4420" 00:17:19.179 }, 00:17:19.179 "peer_address": { 00:17:19.179 "trtype": "TCP", 00:17:19.179 "adrfam": "IPv4", 00:17:19.179 "traddr": "10.0.0.1", 00:17:19.179 "trsvcid": "35952" 00:17:19.179 }, 00:17:19.179 "auth": { 00:17:19.179 "state": "completed", 00:17:19.179 "digest": "sha512", 00:17:19.179 "dhgroup": "ffdhe4096" 00:17:19.179 } 00:17:19.179 } 00:17:19.179 ]' 00:17:19.179 21:10:34 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:19.179 21:10:34 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.179 21:10:34 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:19.179 21:10:35 -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.179 21:10:35 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:19.179 21:10:35 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.179 21:10:35 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.179 21:10:35 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.436 21:10:35 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:19.436 21:10:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.436 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:17:19.436 21:10:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.436 21:10:35 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.436 21:10:35 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:19.436 21:10:35 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.436 21:10:35 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.694 21:10:35 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe6144 0 00:17:19.694 21:10:35 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:19.694 21:10:35 -- target/auth.sh@36 -- # digest=sha512 00:17:19.694 21:10:35 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:19.694 21:10:35 -- target/auth.sh@36 -- # key=key0 00:17:19.694 21:10:35 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:17:19.694 21:10:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.694 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:17:19.694 21:10:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.694 21:10:35 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:19.694 21:10:35 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:19.952 00:17:19.953 21:10:35 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:19.953 21:10:35 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:19.953 21:10:35 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.211 21:10:35 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.211 21:10:35 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.211 21:10:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.211 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:17:20.211 21:10:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.211 21:10:35 -- target/auth.sh@44 -- # qpairs='[ 00:17:20.211 { 00:17:20.211 "cntlid": 65, 00:17:20.211 "qid": 0, 00:17:20.211 "state": "enabled", 00:17:20.211 "listen_address": { 00:17:20.211 "trtype": "TCP", 00:17:20.211 "adrfam": "IPv4", 00:17:20.211 "traddr": "10.0.0.2", 00:17:20.211 "trsvcid": "4420" 00:17:20.211 }, 00:17:20.211 "peer_address": { 00:17:20.211 "trtype": "TCP", 00:17:20.211 "adrfam": "IPv4", 00:17:20.211 "traddr": "10.0.0.1", 00:17:20.211 "trsvcid": "35960" 00:17:20.211 }, 00:17:20.211 "auth": { 00:17:20.211 "state": "completed", 00:17:20.211 "digest": "sha512", 00:17:20.211 "dhgroup": "ffdhe6144" 00:17:20.211 } 00:17:20.211 } 00:17:20.211 ]' 00:17:20.211 21:10:35 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:20.211 21:10:36 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.211 21:10:36 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:20.211 21:10:36 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.211 21:10:36 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:20.211 21:10:36 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.211 21:10:36 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.211 21:10:36 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.469 21:10:36 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:20.469 21:10:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.469 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:17:20.469 21:10:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.469 21:10:36 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:20.469 21:10:36 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.469 21:10:36 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.727 21:10:36 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe6144 1 00:17:20.727 21:10:36 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:20.727 21:10:36 -- target/auth.sh@36 -- # digest=sha512 00:17:20.727 21:10:36 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:20.727 21:10:36 -- 
target/auth.sh@36 -- # key=key1 00:17:20.727 21:10:36 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:17:20.727 21:10:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.727 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:17:20.727 21:10:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.727 21:10:36 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:20.727 21:10:36 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:20.985 00:17:20.985 21:10:36 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:20.985 21:10:36 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:20.985 21:10:36 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.243 21:10:36 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.243 21:10:36 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.244 21:10:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.244 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:17:21.244 21:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.244 21:10:37 -- target/auth.sh@44 -- # qpairs='[ 00:17:21.244 { 00:17:21.244 "cntlid": 66, 00:17:21.244 "qid": 0, 00:17:21.244 "state": "enabled", 00:17:21.244 "listen_address": { 00:17:21.244 "trtype": "TCP", 00:17:21.244 "adrfam": "IPv4", 00:17:21.244 "traddr": "10.0.0.2", 00:17:21.244 "trsvcid": "4420" 00:17:21.244 }, 00:17:21.244 "peer_address": { 00:17:21.244 "trtype": "TCP", 00:17:21.244 "adrfam": "IPv4", 00:17:21.244 "traddr": "10.0.0.1", 00:17:21.244 "trsvcid": "52496" 00:17:21.244 }, 00:17:21.244 "auth": { 00:17:21.244 "state": "completed", 00:17:21.244 "digest": "sha512", 00:17:21.244 "dhgroup": "ffdhe6144" 00:17:21.244 } 00:17:21.244 } 00:17:21.244 ]' 00:17:21.244 21:10:37 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:21.244 21:10:37 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.244 21:10:37 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:21.244 21:10:37 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.244 21:10:37 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:21.244 21:10:37 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.244 21:10:37 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.244 21:10:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.502 21:10:37 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:21.502 21:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.502 21:10:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 21:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.502 21:10:37 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:21.502 21:10:37 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:17:21.502 21:10:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:21.760 21:10:37 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe6144 2 00:17:21.760 21:10:37 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:21.760 21:10:37 -- target/auth.sh@36 -- # digest=sha512 00:17:21.760 21:10:37 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:21.760 21:10:37 -- target/auth.sh@36 -- # key=key2 00:17:21.760 21:10:37 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:17:21.760 21:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.760 21:10:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.760 21:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.760 21:10:37 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:21.760 21:10:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:22.019 00:17:22.019 21:10:37 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:22.019 21:10:37 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:22.019 21:10:37 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.277 21:10:38 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.277 21:10:38 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.277 21:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.277 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.277 21:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.277 21:10:38 -- target/auth.sh@44 -- # qpairs='[ 00:17:22.277 { 00:17:22.277 "cntlid": 67, 00:17:22.277 "qid": 0, 00:17:22.277 "state": "enabled", 00:17:22.277 "listen_address": { 00:17:22.277 "trtype": "TCP", 00:17:22.277 "adrfam": "IPv4", 00:17:22.277 "traddr": "10.0.0.2", 00:17:22.277 "trsvcid": "4420" 00:17:22.277 }, 00:17:22.277 "peer_address": { 00:17:22.277 "trtype": "TCP", 00:17:22.277 "adrfam": "IPv4", 00:17:22.277 "traddr": "10.0.0.1", 00:17:22.277 "trsvcid": "52508" 00:17:22.277 }, 00:17:22.277 "auth": { 00:17:22.277 "state": "completed", 00:17:22.277 "digest": "sha512", 00:17:22.277 "dhgroup": "ffdhe6144" 00:17:22.277 } 00:17:22.277 } 00:17:22.277 ]' 00:17:22.277 21:10:38 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:22.277 21:10:38 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.277 21:10:38 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:22.277 21:10:38 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.277 21:10:38 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:22.277 21:10:38 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.277 21:10:38 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.277 21:10:38 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:22.536 21:10:38 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:22.536 21:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.536 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.536 21:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.536 21:10:38 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:22.536 21:10:38 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.536 21:10:38 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.794 21:10:38 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe6144 3 00:17:22.794 21:10:38 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:22.794 21:10:38 -- target/auth.sh@36 -- # digest=sha512 00:17:22.794 21:10:38 -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:22.794 21:10:38 -- target/auth.sh@36 -- # key=key3 00:17:22.794 21:10:38 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key3 00:17:22.794 21:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.794 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.794 21:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.794 21:10:38 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.794 21:10:38 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.052 00:17:23.052 21:10:38 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:23.052 21:10:38 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:23.052 21:10:38 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.309 21:10:39 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.309 21:10:39 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.309 21:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.309 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:17:23.309 21:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.309 21:10:39 -- target/auth.sh@44 -- # qpairs='[ 00:17:23.309 { 00:17:23.309 "cntlid": 68, 00:17:23.309 "qid": 0, 00:17:23.309 "state": "enabled", 00:17:23.310 "listen_address": { 00:17:23.310 "trtype": "TCP", 00:17:23.310 "adrfam": "IPv4", 00:17:23.310 "traddr": "10.0.0.2", 00:17:23.310 "trsvcid": "4420" 00:17:23.310 }, 00:17:23.310 "peer_address": { 00:17:23.310 "trtype": "TCP", 00:17:23.310 "adrfam": "IPv4", 00:17:23.310 "traddr": "10.0.0.1", 00:17:23.310 "trsvcid": "52520" 00:17:23.310 }, 00:17:23.310 "auth": { 00:17:23.310 "state": "completed", 00:17:23.310 "digest": "sha512", 00:17:23.310 "dhgroup": "ffdhe6144" 00:17:23.310 } 00:17:23.310 } 00:17:23.310 ]' 00:17:23.310 21:10:39 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:23.310 21:10:39 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
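The same pattern repeats for every (dhgroup, key) pair. Paraphrased from the target/auth.sh trace markers (@81-@85), the driver loop looks roughly like the sketch below, with connect_authenticate covering the add_host, attach, jq checks, detach, and remove_host seen in each round; the exact helper bodies are an assumption here, only the call sites appear in the trace:

    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@81
        for keyid in "${!keys[@]}"; do         # auth.sh@82
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"                        # auth.sh@83
            connect_authenticate sha512 "$dhgroup" "$keyid"         # auth.sh@85
        done
    done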
00:17:23.310 21:10:39 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:23.310 21:10:39 -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.310 21:10:39 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:23.310 21:10:39 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.310 21:10:39 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.310 21:10:39 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.568 21:10:39 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:23.568 21:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.568 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:17:23.568 21:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.568 21:10:39 -- target/auth.sh@81 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.568 21:10:39 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:23.568 21:10:39 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:23.568 21:10:39 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:23.825 21:10:39 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe8192 0 00:17:23.825 21:10:39 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:23.825 21:10:39 -- target/auth.sh@36 -- # digest=sha512 00:17:23.825 21:10:39 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.825 21:10:39 -- target/auth.sh@36 -- # key=key0 00:17:23.825 21:10:39 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:17:23.825 21:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.825 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:17:23.825 21:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.825 21:10:39 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:23.825 21:10:39 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:24.390 00:17:24.391 21:10:40 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:24.391 21:10:40 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:24.391 21:10:40 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.391 21:10:40 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.391 21:10:40 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.391 21:10:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.391 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:17:24.391 21:10:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.391 21:10:40 -- target/auth.sh@44 -- # qpairs='[ 00:17:24.391 { 00:17:24.391 "cntlid": 69, 00:17:24.391 "qid": 0, 00:17:24.391 "state": "enabled", 00:17:24.391 "listen_address": { 00:17:24.391 
"trtype": "TCP", 00:17:24.391 "adrfam": "IPv4", 00:17:24.391 "traddr": "10.0.0.2", 00:17:24.391 "trsvcid": "4420" 00:17:24.391 }, 00:17:24.391 "peer_address": { 00:17:24.391 "trtype": "TCP", 00:17:24.391 "adrfam": "IPv4", 00:17:24.391 "traddr": "10.0.0.1", 00:17:24.391 "trsvcid": "52530" 00:17:24.391 }, 00:17:24.391 "auth": { 00:17:24.391 "state": "completed", 00:17:24.391 "digest": "sha512", 00:17:24.391 "dhgroup": "ffdhe8192" 00:17:24.391 } 00:17:24.391 } 00:17:24.391 ]' 00:17:24.391 21:10:40 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:24.647 21:10:40 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.648 21:10:40 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:24.648 21:10:40 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.648 21:10:40 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:24.648 21:10:40 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.648 21:10:40 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.648 21:10:40 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.905 21:10:40 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:24.905 21:10:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.905 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:17:24.905 21:10:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.905 21:10:40 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:24.905 21:10:40 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.905 21:10:40 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.906 21:10:40 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe8192 1 00:17:24.906 21:10:40 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:24.906 21:10:40 -- target/auth.sh@36 -- # digest=sha512 00:17:24.906 21:10:40 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:24.906 21:10:40 -- target/auth.sh@36 -- # key=key1 00:17:24.906 21:10:40 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:17:24.906 21:10:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.906 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:17:24.906 21:10:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.906 21:10:40 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:24.906 21:10:40 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:25.472 00:17:25.472 21:10:41 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:25.472 21:10:41 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:25.472 21:10:41 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.730 21:10:41 -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:17:25.730 21:10:41 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.730 21:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.730 21:10:41 -- common/autotest_common.sh@10 -- # set +x 00:17:25.730 21:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.730 21:10:41 -- target/auth.sh@44 -- # qpairs='[ 00:17:25.730 { 00:17:25.730 "cntlid": 70, 00:17:25.730 "qid": 0, 00:17:25.730 "state": "enabled", 00:17:25.730 "listen_address": { 00:17:25.730 "trtype": "TCP", 00:17:25.730 "adrfam": "IPv4", 00:17:25.730 "traddr": "10.0.0.2", 00:17:25.730 "trsvcid": "4420" 00:17:25.730 }, 00:17:25.730 "peer_address": { 00:17:25.730 "trtype": "TCP", 00:17:25.730 "adrfam": "IPv4", 00:17:25.730 "traddr": "10.0.0.1", 00:17:25.730 "trsvcid": "52536" 00:17:25.730 }, 00:17:25.730 "auth": { 00:17:25.730 "state": "completed", 00:17:25.730 "digest": "sha512", 00:17:25.730 "dhgroup": "ffdhe8192" 00:17:25.730 } 00:17:25.730 } 00:17:25.730 ]' 00:17:25.730 21:10:41 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:25.730 21:10:41 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.730 21:10:41 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:25.730 21:10:41 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.730 21:10:41 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:25.730 21:10:41 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.730 21:10:41 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.730 21:10:41 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.988 21:10:41 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:25.988 21:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.988 21:10:41 -- common/autotest_common.sh@10 -- # set +x 00:17:25.988 21:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.988 21:10:41 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:25.988 21:10:41 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:25.988 21:10:41 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:26.245 21:10:41 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe8192 2 00:17:26.245 21:10:41 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:26.245 21:10:41 -- target/auth.sh@36 -- # digest=sha512 00:17:26.245 21:10:41 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:26.245 21:10:41 -- target/auth.sh@36 -- # key=key2 00:17:26.245 21:10:41 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key2 00:17:26.245 21:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.245 21:10:41 -- common/autotest_common.sh@10 -- # set +x 00:17:26.245 21:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.245 21:10:41 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.246 21:10:41 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.503 00:17:26.761 21:10:42 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:26.761 21:10:42 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:26.761 21:10:42 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.761 21:10:42 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.761 21:10:42 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.761 21:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.761 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:17:26.761 21:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.761 21:10:42 -- target/auth.sh@44 -- # qpairs='[ 00:17:26.761 { 00:17:26.761 "cntlid": 71, 00:17:26.761 "qid": 0, 00:17:26.761 "state": "enabled", 00:17:26.761 "listen_address": { 00:17:26.761 "trtype": "TCP", 00:17:26.761 "adrfam": "IPv4", 00:17:26.761 "traddr": "10.0.0.2", 00:17:26.761 "trsvcid": "4420" 00:17:26.761 }, 00:17:26.761 "peer_address": { 00:17:26.761 "trtype": "TCP", 00:17:26.761 "adrfam": "IPv4", 00:17:26.761 "traddr": "10.0.0.1", 00:17:26.761 "trsvcid": "52542" 00:17:26.761 }, 00:17:26.761 "auth": { 00:17:26.761 "state": "completed", 00:17:26.761 "digest": "sha512", 00:17:26.761 "dhgroup": "ffdhe8192" 00:17:26.761 } 00:17:26.761 } 00:17:26.761 ]' 00:17:26.761 21:10:42 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:26.761 21:10:42 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.761 21:10:42 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:27.019 21:10:42 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.019 21:10:42 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:27.019 21:10:42 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.019 21:10:42 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.019 21:10:42 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.019 21:10:42 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:27.019 21:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.019 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 21:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.019 21:10:42 -- target/auth.sh@82 -- # for keyid in "${!keys[@]}" 00:17:27.019 21:10:42 -- target/auth.sh@83 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.019 21:10:42 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.276 21:10:43 -- target/auth.sh@85 -- # connect_authenticate sha512 ffdhe8192 3 00:17:27.276 21:10:43 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:27.277 21:10:43 -- target/auth.sh@36 -- # digest=sha512 00:17:27.277 21:10:43 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.277 21:10:43 -- target/auth.sh@36 -- # key=key3 00:17:27.277 21:10:43 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 
--dhchap-key key3 00:17:27.277 21:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.277 21:10:43 -- common/autotest_common.sh@10 -- # set +x 00:17:27.277 21:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.277 21:10:43 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.277 21:10:43 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.874 00:17:27.874 21:10:43 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:27.874 21:10:43 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.874 21:10:43 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:27.874 21:10:43 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.874 21:10:43 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.874 21:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.874 21:10:43 -- common/autotest_common.sh@10 -- # set +x 00:17:27.874 21:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.874 21:10:43 -- target/auth.sh@44 -- # qpairs='[ 00:17:27.874 { 00:17:27.874 "cntlid": 72, 00:17:27.874 "qid": 0, 00:17:27.874 "state": "enabled", 00:17:27.874 "listen_address": { 00:17:27.874 "trtype": "TCP", 00:17:27.874 "adrfam": "IPv4", 00:17:27.874 "traddr": "10.0.0.2", 00:17:27.874 "trsvcid": "4420" 00:17:27.874 }, 00:17:27.874 "peer_address": { 00:17:27.874 "trtype": "TCP", 00:17:27.874 "adrfam": "IPv4", 00:17:27.874 "traddr": "10.0.0.1", 00:17:27.874 "trsvcid": "52550" 00:17:27.874 }, 00:17:27.874 "auth": { 00:17:27.874 "state": "completed", 00:17:27.874 "digest": "sha512", 00:17:27.874 "dhgroup": "ffdhe8192" 00:17:27.874 } 00:17:27.874 } 00:17:27.874 ]' 00:17:27.874 21:10:43 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:28.131 21:10:43 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.131 21:10:43 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:28.131 21:10:43 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.131 21:10:43 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:28.131 21:10:43 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.131 21:10:43 -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.131 21:10:43 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.389 21:10:44 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:28.389 21:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.389 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:17:28.389 21:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.389 21:10:44 -- target/auth.sh@91 -- # IFS=, 00:17:28.389 21:10:44 -- target/auth.sh@92 -- # printf %s sha256,sha384,sha512 00:17:28.389 21:10:44 -- target/auth.sh@91 -- # IFS=, 00:17:28.389 21:10:44 -- target/auth.sh@92 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.389 21:10:44 -- target/auth.sh@91 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.389 21:10:44 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.389 21:10:44 -- target/auth.sh@103 -- # connect_authenticate sha512 ffdhe8192 0 00:17:28.389 21:10:44 -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:28.389 21:10:44 -- target/auth.sh@36 -- # digest=sha512 00:17:28.389 21:10:44 -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:28.389 21:10:44 -- target/auth.sh@36 -- # key=key0 00:17:28.389 21:10:44 -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key0 00:17:28.389 21:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.389 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:17:28.389 21:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.389 21:10:44 -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:28.389 21:10:44 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:28.955 00:17:28.955 21:10:44 -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:28.955 21:10:44 -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:28.955 21:10:44 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.213 21:10:44 -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.213 21:10:44 -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.213 21:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.213 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:17:29.213 21:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.213 21:10:44 -- target/auth.sh@44 -- # qpairs='[ 00:17:29.213 { 00:17:29.213 "cntlid": 73, 00:17:29.213 "qid": 0, 00:17:29.213 "state": "enabled", 00:17:29.213 "listen_address": { 00:17:29.213 "trtype": "TCP", 00:17:29.213 "adrfam": "IPv4", 00:17:29.213 "traddr": "10.0.0.2", 00:17:29.213 "trsvcid": "4420" 00:17:29.213 }, 00:17:29.213 "peer_address": { 00:17:29.213 "trtype": "TCP", 00:17:29.213 "adrfam": "IPv4", 00:17:29.213 "traddr": "10.0.0.1", 00:17:29.213 "trsvcid": "52564" 00:17:29.213 }, 00:17:29.213 "auth": { 00:17:29.213 "state": "completed", 00:17:29.213 "digest": "sha512", 00:17:29.213 "dhgroup": "ffdhe8192" 00:17:29.213 } 00:17:29.213 } 00:17:29.213 ]' 00:17:29.213 21:10:44 -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:29.213 21:10:45 -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.213 21:10:45 -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:29.213 21:10:45 -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.213 21:10:45 -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:29.213 21:10:45 -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.213 21:10:45 -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:29.213 21:10:45 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.471 21:10:45 -- target/auth.sh@50 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:29.471 21:10:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.471 21:10:45 -- common/autotest_common.sh@10 -- # set +x 00:17:29.471 21:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.471 21:10:45 -- target/auth.sh@106 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 --dhchap-key key1 00:17:29.471 21:10:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.471 21:10:45 -- common/autotest_common.sh@10 -- # set +x 00:17:29.471 21:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.471 21:10:45 -- target/auth.sh@107 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.471 21:10:45 -- common/autotest_common.sh@638 -- # local es=0 00:17:29.471 21:10:45 -- common/autotest_common.sh@640 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.471 21:10:45 -- common/autotest_common.sh@626 -- # local arg=hostrpc 00:17:29.471 21:10:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.471 21:10:45 -- common/autotest_common.sh@630 -- # type -t hostrpc 00:17:29.471 21:10:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.471 21:10:45 -- common/autotest_common.sh@641 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.471 21:10:45 -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-03.io.spdk:host0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:30.037 request: 00:17:30.037 { 00:17:30.037 "name": "nvme0", 00:17:30.037 "trtype": "tcp", 00:17:30.037 "traddr": "10.0.0.2", 00:17:30.037 "hostnqn": "nqn.2024-03.io.spdk:host0", 00:17:30.037 "adrfam": "ipv4", 00:17:30.037 "trsvcid": "4420", 00:17:30.037 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:30.037 "dhchap_key": "key2", 00:17:30.037 "method": "bdev_nvme_attach_controller", 00:17:30.037 "req_id": 1 00:17:30.037 } 00:17:30.037 Got JSON-RPC error response 00:17:30.037 response: 00:17:30.037 { 00:17:30.037 "code": -32602, 00:17:30.037 "message": "Invalid parameters" 00:17:30.037 } 00:17:30.037 21:10:45 -- common/autotest_common.sh@641 -- # es=1 00:17:30.037 21:10:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:30.037 21:10:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:30.037 21:10:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:30.037 21:10:45 -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2024-03.io.spdk:host0 00:17:30.037 21:10:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.037 21:10:45 -- common/autotest_common.sh@10 -- # set +x 00:17:30.037 21:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.037 21:10:45 -- target/auth.sh@112 -- # trap - 
SIGINT SIGTERM EXIT 00:17:30.037 21:10:45 -- target/auth.sh@113 -- # cleanup 00:17:30.037 21:10:45 -- target/auth.sh@21 -- # killprocess 3048801 00:17:30.037 21:10:45 -- common/autotest_common.sh@936 -- # '[' -z 3048801 ']' 00:17:30.037 21:10:45 -- common/autotest_common.sh@940 -- # kill -0 3048801 00:17:30.037 21:10:45 -- common/autotest_common.sh@941 -- # uname 00:17:30.037 21:10:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.037 21:10:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3048801 00:17:30.037 21:10:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:30.037 21:10:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:30.037 21:10:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3048801' 00:17:30.037 killing process with pid 3048801 00:17:30.037 21:10:45 -- common/autotest_common.sh@955 -- # kill 3048801 00:17:30.037 21:10:45 -- common/autotest_common.sh@960 -- # wait 3048801 00:17:30.295 21:10:46 -- target/auth.sh@22 -- # nvmftestfini 00:17:30.295 21:10:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:30.295 21:10:46 -- nvmf/common.sh@117 -- # sync 00:17:30.295 21:10:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.295 21:10:46 -- nvmf/common.sh@120 -- # set +e 00:17:30.295 21:10:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.295 21:10:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.295 rmmod nvme_tcp 00:17:30.295 rmmod nvme_fabrics 00:17:30.295 rmmod nvme_keyring 00:17:30.295 21:10:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.295 21:10:46 -- nvmf/common.sh@124 -- # set -e 00:17:30.295 21:10:46 -- nvmf/common.sh@125 -- # return 0 00:17:30.295 21:10:46 -- nvmf/common.sh@478 -- # '[' -n 3048765 ']' 00:17:30.295 21:10:46 -- nvmf/common.sh@479 -- # killprocess 3048765 00:17:30.295 21:10:46 -- common/autotest_common.sh@936 -- # '[' -z 3048765 ']' 00:17:30.295 21:10:46 -- common/autotest_common.sh@940 -- # kill -0 3048765 00:17:30.295 21:10:46 -- common/autotest_common.sh@941 -- # uname 00:17:30.295 21:10:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.295 21:10:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3048765 00:17:30.295 21:10:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:30.295 21:10:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:30.295 21:10:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3048765' 00:17:30.295 killing process with pid 3048765 00:17:30.295 21:10:46 -- common/autotest_common.sh@955 -- # kill 3048765 00:17:30.295 21:10:46 -- common/autotest_common.sh@960 -- # wait 3048765 00:17:30.554 21:10:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:30.554 21:10:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:30.554 21:10:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:30.554 21:10:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.554 21:10:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.554 21:10:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.554 21:10:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.554 21:10:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.087 21:10:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:33.087 21:10:48 -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pZ1 /tmp/spdk.key-sha256.4en /tmp/spdk.key-sha384.HBs 
/tmp/spdk.key-sha512.oz4 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:33.087 00:17:33.087 real 1m24.213s 00:17:33.087 user 3m23.297s 00:17:33.087 sys 0m16.357s 00:17:33.087 21:10:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:33.087 21:10:48 -- common/autotest_common.sh@10 -- # set +x 00:17:33.087 ************************************ 00:17:33.087 END TEST nvmf_auth_target 00:17:33.087 ************************************ 00:17:33.087 21:10:48 -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:33.087 21:10:48 -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:33.087 21:10:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:33.087 21:10:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.087 21:10:48 -- common/autotest_common.sh@10 -- # set +x 00:17:33.087 ************************************ 00:17:33.087 START TEST nvmf_bdevio_no_huge 00:17:33.087 ************************************ 00:17:33.087 21:10:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:33.087 * Looking for test storage... 00:17:33.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.087 21:10:48 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.087 21:10:48 -- nvmf/common.sh@7 -- # uname -s 00:17:33.087 21:10:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.087 21:10:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.087 21:10:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.087 21:10:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.087 21:10:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.087 21:10:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.087 21:10:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.087 21:10:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.087 21:10:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.087 21:10:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.087 21:10:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.087 21:10:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.087 21:10:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.087 21:10:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.087 21:10:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.087 21:10:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.087 21:10:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.087 21:10:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.087 21:10:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.087 21:10:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.087 21:10:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.087 21:10:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.087 21:10:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.087 21:10:48 -- paths/export.sh@5 -- # export PATH 00:17:33.087 21:10:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.087 21:10:48 -- nvmf/common.sh@47 -- # : 0 00:17:33.087 21:10:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.087 21:10:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.087 21:10:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.087 21:10:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.087 21:10:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.087 21:10:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.087 21:10:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.087 21:10:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.087 21:10:48 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.087 21:10:48 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.087 21:10:48 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:33.087 21:10:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:33.087 21:10:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.087 21:10:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:33.087 21:10:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:33.087 21:10:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:33.087 21:10:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:17:33.087 21:10:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.087 21:10:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.087 21:10:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:33.087 21:10:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:33.087 21:10:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.087 21:10:48 -- common/autotest_common.sh@10 -- # set +x 00:17:39.652 21:10:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:39.652 21:10:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.652 21:10:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.652 21:10:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.652 21:10:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.652 21:10:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.652 21:10:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.652 21:10:54 -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.652 21:10:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.652 21:10:54 -- nvmf/common.sh@296 -- # e810=() 00:17:39.652 21:10:54 -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.652 21:10:54 -- nvmf/common.sh@297 -- # x722=() 00:17:39.652 21:10:54 -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.652 21:10:54 -- nvmf/common.sh@298 -- # mlx=() 00:17:39.652 21:10:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.652 21:10:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.652 21:10:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.652 21:10:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.652 21:10:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.652 21:10:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.652 21:10:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:39.652 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:39.652 21:10:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.652 21:10:54 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:39.652 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:39.652 21:10:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.652 21:10:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.652 21:10:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.652 21:10:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:39.652 21:10:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.652 21:10:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:39.652 Found net devices under 0000:86:00.0: cvl_0_0 00:17:39.652 21:10:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.652 21:10:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.652 21:10:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.652 21:10:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:39.652 21:10:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.652 21:10:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:39.652 Found net devices under 0000:86:00.1: cvl_0_1 00:17:39.652 21:10:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.652 21:10:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:39.652 21:10:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:39.652 21:10:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:39.652 21:10:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.652 21:10:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.652 21:10:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.652 21:10:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.652 21:10:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.652 21:10:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.652 21:10:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.652 21:10:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.652 21:10:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.652 21:10:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.652 21:10:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.652 21:10:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.652 21:10:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.652 21:10:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.652 21:10:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.652 21:10:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.652 21:10:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
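The nvmf_tcp_init steps around this point build the two-endpoint TCP topology the rest of the run relies on: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed into a plain shell sketch, with interface names and addresses taken verbatim from this trace (the loopback, iptables, and ping steps appear immediately below):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                                 # reachability check before the test proceeds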
00:17:39.652 21:10:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.652 21:10:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.652 21:10:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:17:39.652 00:17:39.652 --- 10.0.0.2 ping statistics --- 00:17:39.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.652 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:17:39.652 21:10:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:17:39.652 00:17:39.652 --- 10.0.0.1 ping statistics --- 00:17:39.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.652 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:17:39.652 21:10:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.652 21:10:54 -- nvmf/common.sh@411 -- # return 0 00:17:39.652 21:10:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:39.652 21:10:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.652 21:10:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:39.652 21:10:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:39.653 21:10:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.653 21:10:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:39.653 21:10:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:39.653 21:10:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:39.653 21:10:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:39.653 21:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:39.653 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:17:39.653 21:10:54 -- nvmf/common.sh@470 -- # nvmfpid=3065998 00:17:39.653 21:10:54 -- nvmf/common.sh@471 -- # waitforlisten 3065998 00:17:39.653 21:10:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:39.653 21:10:54 -- common/autotest_common.sh@817 -- # '[' -z 3065998 ']' 00:17:39.653 21:10:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.653 21:10:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:39.653 21:10:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.653 21:10:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:39.653 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:17:39.653 [2024-04-18 21:10:55.035190] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:17:39.653 [2024-04-18 21:10:55.035240] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:39.653 [2024-04-18 21:10:55.104809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.653 [2024-04-18 21:10:55.188365] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.653 [2024-04-18 21:10:55.188398] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.653 [2024-04-18 21:10:55.188405] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.653 [2024-04-18 21:10:55.188411] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.653 [2024-04-18 21:10:55.188416] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.653 [2024-04-18 21:10:55.188567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:39.653 [2024-04-18 21:10:55.188672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:39.653 [2024-04-18 21:10:55.188781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.653 [2024-04-18 21:10:55.188782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:40.216 21:10:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.216 21:10:55 -- common/autotest_common.sh@850 -- # return 0 00:17:40.216 21:10:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:40.216 21:10:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:40.216 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:17:40.216 21:10:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.216 21:10:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.216 21:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.216 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:17:40.216 [2024-04-18 21:10:55.884410] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.216 21:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.216 21:10:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.216 21:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.216 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:17:40.216 Malloc0 00:17:40.216 21:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.216 21:10:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.216 21:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.216 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:17:40.216 21:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.216 21:10:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.216 21:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.216 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:17:40.216 21:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.216 21:10:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.216 21:10:55 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.216 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:17:40.216 [2024-04-18 21:10:55.920628] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.216 21:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.216 21:10:55 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:40.216 21:10:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:40.216 21:10:55 -- nvmf/common.sh@521 -- # config=() 00:17:40.216 21:10:55 -- nvmf/common.sh@521 -- # local subsystem config 00:17:40.216 21:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:40.216 21:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:40.216 { 00:17:40.216 "params": { 00:17:40.216 "name": "Nvme$subsystem", 00:17:40.216 "trtype": "$TEST_TRANSPORT", 00:17:40.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.216 "adrfam": "ipv4", 00:17:40.216 "trsvcid": "$NVMF_PORT", 00:17:40.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.216 "hdgst": ${hdgst:-false}, 00:17:40.216 "ddgst": ${ddgst:-false} 00:17:40.216 }, 00:17:40.216 "method": "bdev_nvme_attach_controller" 00:17:40.216 } 00:17:40.216 EOF 00:17:40.216 )") 00:17:40.216 21:10:55 -- nvmf/common.sh@543 -- # cat 00:17:40.216 21:10:55 -- nvmf/common.sh@545 -- # jq . 00:17:40.216 21:10:55 -- nvmf/common.sh@546 -- # IFS=, 00:17:40.216 21:10:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:40.216 "params": { 00:17:40.216 "name": "Nvme1", 00:17:40.216 "trtype": "tcp", 00:17:40.216 "traddr": "10.0.0.2", 00:17:40.216 "adrfam": "ipv4", 00:17:40.216 "trsvcid": "4420", 00:17:40.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.216 "hdgst": false, 00:17:40.216 "ddgst": false 00:17:40.216 }, 00:17:40.216 "method": "bdev_nvme_attach_controller" 00:17:40.216 }' 00:17:40.216 [2024-04-18 21:10:55.968093] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:17:40.216 [2024-04-18 21:10:55.968142] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3066126 ] 00:17:40.216 [2024-04-18 21:10:56.031647] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.216 [2024-04-18 21:10:56.115837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.216 [2024-04-18 21:10:56.115930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.216 [2024-04-18 21:10:56.115931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.474 I/O targets: 00:17:40.474 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:40.474 00:17:40.474 00:17:40.474 CUnit - A unit testing framework for C - Version 2.1-3 00:17:40.474 http://cunit.sourceforge.net/ 00:17:40.474 00:17:40.474 00:17:40.474 Suite: bdevio tests on: Nvme1n1 00:17:40.474 Test: blockdev write read block ...passed 00:17:40.474 Test: blockdev write zeroes read block ...passed 00:17:40.474 Test: blockdev write zeroes read no split ...passed 00:17:40.732 Test: blockdev write zeroes read split ...passed 00:17:40.732 Test: blockdev write zeroes read split partial ...passed 00:17:40.732 Test: blockdev reset ...[2024-04-18 21:10:56.488019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:40.732 [2024-04-18 21:10:56.488081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1363810 (9): Bad file descriptor 00:17:40.732 [2024-04-18 21:10:56.506922] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:40.732 passed 00:17:40.732 Test: blockdev write read 8 blocks ...passed 00:17:40.732 Test: blockdev write read size > 128k ...passed 00:17:40.732 Test: blockdev write read invalid size ...passed 00:17:40.732 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.732 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.732 Test: blockdev write read max offset ...passed 00:17:40.732 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.732 Test: blockdev writev readv 8 blocks ...passed 00:17:40.732 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.990 Test: blockdev writev readv block ...passed 00:17:40.990 Test: blockdev writev readv size > 128k ...passed 00:17:40.990 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.990 Test: blockdev comparev and writev ...[2024-04-18 21:10:56.687551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.990 [2024-04-18 21:10:56.687579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.687592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.990 [2024-04-18 21:10:56.687600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.687980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.990 [2024-04-18 21:10:56.687993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.688005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.990 [2024-04-18 21:10:56.688013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.688393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.990 [2024-04-18 21:10:56.688408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.688420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.990 [2024-04-18 21:10:56.688428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.688815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.990 [2024-04-18 21:10:56.688826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.688838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.990 [2024-04-18 21:10:56.688846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:40.990 passed 00:17:40.990 Test: blockdev nvme passthru rw ...passed 00:17:40.990 Test: blockdev nvme passthru vendor specific ...[2024-04-18 21:10:56.773120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.990 [2024-04-18 21:10:56.773136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.773376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.990 [2024-04-18 21:10:56.773386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.773624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.990 [2024-04-18 21:10:56.773635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:40.990 [2024-04-18 21:10:56.773877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.990 [2024-04-18 21:10:56.773889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:40.990 passed 00:17:40.990 Test: blockdev nvme admin passthru ...passed 00:17:40.990 Test: blockdev copy ...passed 00:17:40.990 00:17:40.990 Run Summary: Type Total Ran Passed Failed Inactive 00:17:40.990 suites 1 1 n/a 0 0 00:17:40.990 tests 23 23 23 0 0 00:17:40.990 asserts 152 152 152 0 
n/a 00:17:40.990 00:17:40.990 Elapsed time = 1.149 seconds 00:17:41.247 21:10:57 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.247 21:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.247 21:10:57 -- common/autotest_common.sh@10 -- # set +x 00:17:41.247 21:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.247 21:10:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:41.247 21:10:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:41.248 21:10:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:41.248 21:10:57 -- nvmf/common.sh@117 -- # sync 00:17:41.248 21:10:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.248 21:10:57 -- nvmf/common.sh@120 -- # set +e 00:17:41.248 21:10:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.248 21:10:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.248 rmmod nvme_tcp 00:17:41.248 rmmod nvme_fabrics 00:17:41.248 rmmod nvme_keyring 00:17:41.505 21:10:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.505 21:10:57 -- nvmf/common.sh@124 -- # set -e 00:17:41.505 21:10:57 -- nvmf/common.sh@125 -- # return 0 00:17:41.505 21:10:57 -- nvmf/common.sh@478 -- # '[' -n 3065998 ']' 00:17:41.505 21:10:57 -- nvmf/common.sh@479 -- # killprocess 3065998 00:17:41.505 21:10:57 -- common/autotest_common.sh@936 -- # '[' -z 3065998 ']' 00:17:41.505 21:10:57 -- common/autotest_common.sh@940 -- # kill -0 3065998 00:17:41.505 21:10:57 -- common/autotest_common.sh@941 -- # uname 00:17:41.505 21:10:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.505 21:10:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3065998 00:17:41.505 21:10:57 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:41.505 21:10:57 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:41.505 21:10:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3065998' 00:17:41.505 killing process with pid 3065998 00:17:41.505 21:10:57 -- common/autotest_common.sh@955 -- # kill 3065998 00:17:41.505 21:10:57 -- common/autotest_common.sh@960 -- # wait 3065998 00:17:41.763 21:10:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:41.763 21:10:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:41.763 21:10:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:41.763 21:10:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.763 21:10:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.763 21:10:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.763 21:10:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.763 21:10:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.297 21:10:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.297 00:17:44.297 real 0m10.980s 00:17:44.297 user 0m12.974s 00:17:44.297 sys 0m5.524s 00:17:44.297 21:10:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:44.297 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:17:44.297 ************************************ 00:17:44.297 END TEST nvmf_bdevio_no_huge 00:17:44.297 ************************************ 00:17:44.297 21:10:59 -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:44.297 21:10:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.297 21:10:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 
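Before the TLS suite output starts, a note on how the bdevio run above was wired to the target: gen_nvmf_target_json emitted a one-controller JSON config that bdevio consumed via --json /dev/fd/62 (presumably a bash process substitution around the generator; the exact plumbing is not shown in this trace), together with --no-huge -s 1024 so the app ran from a 1024 MB anonymous-memory pool instead of hugepages, which is the point of this test. The attach stanza, with values copied from the trace above, reduces to:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }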
00:17:44.297 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:17:44.297 ************************************ 00:17:44.297 START TEST nvmf_tls 00:17:44.297 ************************************ 00:17:44.297 21:10:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:44.297 * Looking for test storage... 00:17:44.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.297 21:10:59 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.297 21:10:59 -- nvmf/common.sh@7 -- # uname -s 00:17:44.297 21:10:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.297 21:10:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.297 21:10:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.297 21:10:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.297 21:10:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.297 21:10:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.297 21:10:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.297 21:10:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.297 21:10:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.297 21:10:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.297 21:10:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.297 21:10:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.297 21:10:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.297 21:10:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.297 21:10:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.297 21:10:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.297 21:10:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.297 21:10:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.297 21:10:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.297 21:10:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.297 21:10:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.297 21:10:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.297 21:10:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.297 21:10:59 -- paths/export.sh@5 -- # export PATH 00:17:44.297 21:10:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.297 21:10:59 -- nvmf/common.sh@47 -- # : 0 00:17:44.297 21:10:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.297 21:10:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.297 21:10:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.297 21:10:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.297 21:10:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.297 21:10:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.297 21:10:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.297 21:10:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.297 21:10:59 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.297 21:10:59 -- target/tls.sh@62 -- # nvmftestinit 00:17:44.297 21:10:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:44.297 21:10:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.297 21:10:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:44.297 21:10:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:44.297 21:10:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:44.297 21:10:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.297 21:10:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.297 21:10:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.297 21:10:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:44.297 21:10:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:44.297 21:10:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.297 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.872 21:11:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:50.872 21:11:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.872 21:11:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.872 21:11:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.872 21:11:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.872 21:11:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.872 21:11:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.872 21:11:05 -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.872 21:11:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.872 21:11:05 -- nvmf/common.sh@296 -- # e810=() 00:17:50.872 
21:11:05 -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.872 21:11:05 -- nvmf/common.sh@297 -- # x722=() 00:17:50.872 21:11:05 -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.872 21:11:05 -- nvmf/common.sh@298 -- # mlx=() 00:17:50.872 21:11:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.872 21:11:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.872 21:11:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.872 21:11:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.872 21:11:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.872 21:11:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.872 21:11:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:50.872 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:50.872 21:11:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.872 21:11:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:50.872 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:50.872 21:11:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.872 21:11:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.872 21:11:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.872 21:11:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.872 21:11:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:50.872 21:11:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.872 21:11:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:50.872 Found net devices under 
0000:86:00.0: cvl_0_0 00:17:50.872 21:11:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.872 21:11:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.872 21:11:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.872 21:11:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:50.873 21:11:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.873 21:11:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:50.873 Found net devices under 0000:86:00.1: cvl_0_1 00:17:50.873 21:11:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.873 21:11:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:50.873 21:11:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:50.873 21:11:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:50.873 21:11:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:50.873 21:11:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:50.873 21:11:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.873 21:11:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.873 21:11:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.873 21:11:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.873 21:11:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.873 21:11:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.873 21:11:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.873 21:11:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.873 21:11:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.873 21:11:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.873 21:11:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.873 21:11:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.873 21:11:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.873 21:11:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.873 21:11:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.873 21:11:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.873 21:11:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.873 21:11:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.873 21:11:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.873 21:11:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:17:50.873 00:17:50.873 --- 10.0.0.2 ping statistics --- 00:17:50.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.873 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:50.873 21:11:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:17:50.873 00:17:50.873 --- 10.0.0.1 ping statistics --- 00:17:50.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.873 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:17:50.873 21:11:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.873 21:11:05 -- nvmf/common.sh@411 -- # return 0 00:17:50.873 21:11:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:50.873 21:11:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.873 21:11:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:50.873 21:11:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:50.873 21:11:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.873 21:11:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:50.873 21:11:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:50.873 21:11:05 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:50.873 21:11:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:50.873 21:11:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:50.873 21:11:05 -- common/autotest_common.sh@10 -- # set +x 00:17:50.873 21:11:06 -- nvmf/common.sh@470 -- # nvmfpid=3070232 00:17:50.873 21:11:06 -- nvmf/common.sh@471 -- # waitforlisten 3070232 00:17:50.873 21:11:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:50.873 21:11:06 -- common/autotest_common.sh@817 -- # '[' -z 3070232 ']' 00:17:50.873 21:11:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.873 21:11:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:50.873 21:11:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.873 21:11:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:50.873 21:11:06 -- common/autotest_common.sh@10 -- # set +x 00:17:50.873 [2024-04-18 21:11:06.050320] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:17:50.873 [2024-04-18 21:11:06.050366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.873 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.873 [2024-04-18 21:11:06.116056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.873 [2024-04-18 21:11:06.193396] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.873 [2024-04-18 21:11:06.193431] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.873 [2024-04-18 21:11:06.193438] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.873 [2024-04-18 21:11:06.193444] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.873 [2024-04-18 21:11:06.193449] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
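The sequence traced above is the fixture for everything that follows: the two net devices found under the E810 functions (cvl_0_0 under 0000:86:00.0, cvl_0_1 under 0000:86:00.1) are split across namespaces so one host can act as both NVMe/TCP target and initiator. Restated as a standalone sketch, with the interface names, addresses, and port taken from the log and error handling omitted:

#!/usr/bin/env bash
# Minimal sketch of the NVMe/TCP test-bed plumbing traced above.
# Assumes two ports already bound to the kernel driver (cvl_0_0 / cvl_0_1
# in this run); adjust the names for other setups.
set -e

TARGET_IF=cvl_0_0            # moved into the namespace, target side (10.0.0.2)
INITIATOR_IF=cvl_0_1         # stays in the default namespace, initiator side (10.0.0.1)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) into the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Reachability checks, as in the log: target IP from the default namespace,
# initiator IP from inside the namespace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

nvmf_tgt is then launched through ip netns exec cvl_0_0_ns_spdk with --wait-for-rpc, which is the startup visible immediately above and continuing below.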
00:17:50.873 [2024-04-18 21:11:06.193473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.132 21:11:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:51.132 21:11:06 -- common/autotest_common.sh@850 -- # return 0 00:17:51.132 21:11:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:51.132 21:11:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:51.132 21:11:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.132 21:11:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.132 21:11:06 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:51.132 21:11:06 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:51.132 true 00:17:51.132 21:11:07 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.132 21:11:07 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:51.419 21:11:07 -- target/tls.sh@73 -- # version=0 00:17:51.420 21:11:07 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:51.420 21:11:07 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:51.686 21:11:07 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.686 21:11:07 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:51.686 21:11:07 -- target/tls.sh@81 -- # version=13 00:17:51.686 21:11:07 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:51.686 21:11:07 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:51.945 21:11:07 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.945 21:11:07 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:52.204 21:11:07 -- target/tls.sh@89 -- # version=7 00:17:52.204 21:11:07 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:52.204 21:11:07 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.204 21:11:07 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:52.204 21:11:08 -- target/tls.sh@96 -- # ktls=false 00:17:52.204 21:11:08 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:52.204 21:11:08 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:52.464 21:11:08 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.464 21:11:08 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:52.724 21:11:08 -- target/tls.sh@104 -- # ktls=true 00:17:52.724 21:11:08 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:52.724 21:11:08 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:52.724 21:11:08 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.724 21:11:08 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:52.983 21:11:08 -- target/tls.sh@112 -- # ktls=false 00:17:52.983 21:11:08 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:52.983 21:11:08 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
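With the target idle at --wait-for-rpc, the test switches the default socket implementation to ssl and round-trips its options over JSON-RPC: tls_version is set to 13 and then 7 and read back with jq, and kTLS is toggled on and off the same way, all before framework_start_init is issued. A compact sketch of that round-trip, using only the rpc.py calls visible in the trace (the rpc.py path is the checkout used by this run):

#!/usr/bin/env bash
# Round-trip of ssl socket-impl options over JSON-RPC, as exercised above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC" sock_set_default_impl -i ssl                  # make ssl the default impl

"$RPC" sock_impl_set_options -i ssl --tls-version 13
ver=$("$RPC" sock_impl_get_options -i ssl | jq -r .tls_version)
[[ $ver == 13 ]] || echo "unexpected tls_version: $ver" >&2

"$RPC" sock_impl_set_options -i ssl --enable-ktls
ktls=$("$RPC" sock_impl_get_options -i ssl | jq -r .enable_ktls)
[[ $ktls == true ]] || echo "unexpected enable_ktls: $ktls" >&2

"$RPC" sock_impl_set_options -i ssl --disable-ktls   # back to the default

The trace resumes below with the expansion of the format_interchange_psk call echoed just above.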
00:17:52.983 21:11:08 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:52.983 21:11:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:52.983 21:11:08 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:52.983 21:11:08 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:52.983 21:11:08 -- nvmf/common.sh@693 -- # digest=1 00:17:52.983 21:11:08 -- nvmf/common.sh@694 -- # python - 00:17:52.983 21:11:08 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.983 21:11:08 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:52.983 21:11:08 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:52.983 21:11:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:52.983 21:11:08 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:52.983 21:11:08 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:52.983 21:11:08 -- nvmf/common.sh@693 -- # digest=1 00:17:52.983 21:11:08 -- nvmf/common.sh@694 -- # python - 00:17:52.983 21:11:08 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.983 21:11:08 -- target/tls.sh@121 -- # mktemp 00:17:52.983 21:11:08 -- target/tls.sh@121 -- # key_path=/tmp/tmp.p9sszzdDoP 00:17:52.983 21:11:08 -- target/tls.sh@122 -- # mktemp 00:17:52.983 21:11:08 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Lu9I4YB0Nk 00:17:52.983 21:11:08 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.983 21:11:08 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.983 21:11:08 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.p9sszzdDoP 00:17:52.983 21:11:08 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Lu9I4YB0Nk 00:17:52.983 21:11:08 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:53.243 21:11:09 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:53.502 21:11:09 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.p9sszzdDoP 00:17:53.502 21:11:09 -- target/tls.sh@49 -- # local key=/tmp/tmp.p9sszzdDoP 00:17:53.502 21:11:09 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:53.502 [2024-04-18 21:11:09.410581] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.502 21:11:09 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.762 21:11:09 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.022 [2024-04-18 21:11:09.759473] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.022 [2024-04-18 21:11:09.759671] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.022 21:11:09 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:54.022 malloc0 00:17:54.281 21:11:09 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:54.281 21:11:10 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p9sszzdDoP 00:17:54.541 [2024-04-18 21:11:10.309231] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:54.541 21:11:10 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.p9sszzdDoP 00:17:54.541 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.526 Initializing NVMe Controllers 00:18:04.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.526 Initialization complete. Launching workers. 00:18:04.526 ======================================================== 00:18:04.526 Latency(us) 00:18:04.526 Device Information : IOPS MiB/s Average min max 00:18:04.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16298.73 63.67 3927.14 878.66 5540.49 00:18:04.526 ======================================================== 00:18:04.526 Total : 16298.73 63.67 3927.14 878.66 5540.49 00:18:04.526 00:18:04.526 21:11:20 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p9sszzdDoP 00:18:04.526 21:11:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.526 21:11:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.526 21:11:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.526 21:11:20 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p9sszzdDoP' 00:18:04.526 21:11:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.526 21:11:20 -- target/tls.sh@28 -- # bdevperf_pid=3072643 00:18:04.526 21:11:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.526 21:11:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.526 21:11:20 -- target/tls.sh@31 -- # waitforlisten 3072643 /var/tmp/bdevperf.sock 00:18:04.526 21:11:20 -- common/autotest_common.sh@817 -- # '[' -z 3072643 ']' 00:18:04.526 21:11:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.526 21:11:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:04.526 21:11:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.526 21:11:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:04.526 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:18:04.785 [2024-04-18 21:11:20.475968] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:18:04.785 [2024-04-18 21:11:20.476015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072643 ] 00:18:04.785 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.785 [2024-04-18 21:11:20.530042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.785 [2024-04-18 21:11:20.601067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.354 21:11:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:05.354 21:11:21 -- common/autotest_common.sh@850 -- # return 0 00:18:05.354 21:11:21 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p9sszzdDoP 00:18:05.614 [2024-04-18 21:11:21.424183] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.614 [2024-04-18 21:11:21.424254] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:05.614 TLSTESTn1 00:18:05.614 21:11:21 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:05.873 Running I/O for 10 seconds... 00:18:15.880 00:18:15.880 Latency(us) 00:18:15.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.880 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.880 Verification LBA range: start 0x0 length 0x2000 00:18:15.880 TLSTESTn1 : 10.04 1910.83 7.46 0.00 0.00 66849.71 7351.43 86621.50 00:18:15.880 =================================================================================================================== 00:18:15.880 Total : 1910.83 7.46 0.00 0.00 66849.71 7351.43 86621.50 00:18:15.880 0 00:18:15.880 21:11:31 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.880 21:11:31 -- target/tls.sh@45 -- # killprocess 3072643 00:18:15.880 21:11:31 -- common/autotest_common.sh@936 -- # '[' -z 3072643 ']' 00:18:15.880 21:11:31 -- common/autotest_common.sh@940 -- # kill -0 3072643 00:18:15.880 21:11:31 -- common/autotest_common.sh@941 -- # uname 00:18:15.880 21:11:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.880 21:11:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3072643 00:18:15.880 21:11:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:15.880 21:11:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:15.880 21:11:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3072643' 00:18:15.880 killing process with pid 3072643 00:18:15.880 21:11:31 -- common/autotest_common.sh@955 -- # kill 3072643 00:18:15.880 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.880 00:18:15.880 Latency(us) 00:18:15.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.880 =================================================================================================================== 00:18:15.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.880 [2024-04-18 21:11:31.733222] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:15.880 21:11:31 -- common/autotest_common.sh@960 -- # wait 3072643 00:18:16.140 21:11:31 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lu9I4YB0Nk 00:18:16.140 21:11:31 -- common/autotest_common.sh@638 -- # local es=0 00:18:16.140 21:11:31 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lu9I4YB0Nk 00:18:16.140 21:11:31 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:16.140 21:11:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.140 21:11:31 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:16.140 21:11:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.140 21:11:31 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lu9I4YB0Nk 00:18:16.140 21:11:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.140 21:11:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.140 21:11:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.140 21:11:31 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Lu9I4YB0Nk' 00:18:16.140 21:11:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.140 21:11:31 -- target/tls.sh@28 -- # bdevperf_pid=3074474 00:18:16.140 21:11:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.140 21:11:31 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.140 21:11:31 -- target/tls.sh@31 -- # waitforlisten 3074474 /var/tmp/bdevperf.sock 00:18:16.140 21:11:31 -- common/autotest_common.sh@817 -- # '[' -z 3074474 ']' 00:18:16.140 21:11:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.140 21:11:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:16.140 21:11:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.140 21:11:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:16.140 21:11:31 -- common/autotest_common.sh@10 -- # set +x 00:18:16.140 [2024-04-18 21:11:31.990372] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:18:16.140 [2024-04-18 21:11:31.990420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074474 ] 00:18:16.140 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.140 [2024-04-18 21:11:32.045459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.399 [2024-04-18 21:11:32.114093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.967 21:11:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:16.967 21:11:32 -- common/autotest_common.sh@850 -- # return 0 00:18:16.968 21:11:32 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lu9I4YB0Nk 00:18:17.227 [2024-04-18 21:11:32.940647] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.227 [2024-04-18 21:11:32.940727] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:17.227 [2024-04-18 21:11:32.948043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.227 [2024-04-18 21:11:32.949129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf570 (107): Transport endpoint is not connected 00:18:17.227 [2024-04-18 21:11:32.950123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf570 (9): Bad file descriptor 00:18:17.227 [2024-04-18 21:11:32.951125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.227 [2024-04-18 21:11:32.951134] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.227 [2024-04-18 21:11:32.951140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:17.227 request: 00:18:17.227 { 00:18:17.227 "name": "TLSTEST", 00:18:17.227 "trtype": "tcp", 00:18:17.227 "traddr": "10.0.0.2", 00:18:17.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.227 "adrfam": "ipv4", 00:18:17.227 "trsvcid": "4420", 00:18:17.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.227 "psk": "/tmp/tmp.Lu9I4YB0Nk", 00:18:17.227 "method": "bdev_nvme_attach_controller", 00:18:17.227 "req_id": 1 00:18:17.227 } 00:18:17.227 Got JSON-RPC error response 00:18:17.227 response: 00:18:17.227 { 00:18:17.227 "code": -32602, 00:18:17.227 "message": "Invalid parameters" 00:18:17.227 } 00:18:17.227 21:11:32 -- target/tls.sh@36 -- # killprocess 3074474 00:18:17.227 21:11:32 -- common/autotest_common.sh@936 -- # '[' -z 3074474 ']' 00:18:17.227 21:11:32 -- common/autotest_common.sh@940 -- # kill -0 3074474 00:18:17.227 21:11:32 -- common/autotest_common.sh@941 -- # uname 00:18:17.227 21:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.227 21:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3074474 00:18:17.227 21:11:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:17.227 21:11:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:17.227 21:11:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3074474' 00:18:17.227 killing process with pid 3074474 00:18:17.227 21:11:33 -- common/autotest_common.sh@955 -- # kill 3074474 00:18:17.227 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.227 00:18:17.227 Latency(us) 00:18:17.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.227 =================================================================================================================== 00:18:17.227 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.227 [2024-04-18 21:11:33.012682] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.227 21:11:33 -- common/autotest_common.sh@960 -- # wait 3074474 00:18:17.486 21:11:33 -- target/tls.sh@37 -- # return 1 00:18:17.486 21:11:33 -- common/autotest_common.sh@641 -- # es=1 00:18:17.486 21:11:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:17.486 21:11:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:17.486 21:11:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:17.486 21:11:33 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.p9sszzdDoP 00:18:17.486 21:11:33 -- common/autotest_common.sh@638 -- # local es=0 00:18:17.486 21:11:33 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.p9sszzdDoP 00:18:17.486 21:11:33 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:17.486 21:11:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:17.486 21:11:33 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:17.486 21:11:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:17.486 21:11:33 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.p9sszzdDoP 00:18:17.486 21:11:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.486 21:11:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.486 21:11:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:18:17.486 21:11:33 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p9sszzdDoP' 00:18:17.486 21:11:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.486 21:11:33 -- target/tls.sh@28 -- # bdevperf_pid=3074718 00:18:17.486 21:11:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.486 21:11:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.486 21:11:33 -- target/tls.sh@31 -- # waitforlisten 3074718 /var/tmp/bdevperf.sock 00:18:17.486 21:11:33 -- common/autotest_common.sh@817 -- # '[' -z 3074718 ']' 00:18:17.486 21:11:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.486 21:11:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:17.486 21:11:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.487 21:11:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:17.487 21:11:33 -- common/autotest_common.sh@10 -- # set +x 00:18:17.487 [2024-04-18 21:11:33.257594] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:17.487 [2024-04-18 21:11:33.257641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074718 ] 00:18:17.487 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.487 [2024-04-18 21:11:33.312899] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.487 [2024-04-18 21:11:33.379460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.426 21:11:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:18.426 21:11:34 -- common/autotest_common.sh@850 -- # return 0 00:18:18.426 21:11:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.p9sszzdDoP 00:18:18.426 [2024-04-18 21:11:34.218353] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.426 [2024-04-18 21:11:34.218429] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.426 [2024-04-18 21:11:34.223033] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:18.426 [2024-04-18 21:11:34.223057] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:18.426 [2024-04-18 21:11:34.223081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:18.426 [2024-04-18 21:11:34.223756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d36570 (107): Transport endpoint is not connected 00:18:18.426 [2024-04-18 21:11:34.224747] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d36570 (9): Bad file descriptor 00:18:18.426 [2024-04-18 21:11:34.225748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:18.426 [2024-04-18 21:11:34.225758] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:18.426 [2024-04-18 21:11:34.225765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:18.426 request: 00:18:18.426 { 00:18:18.426 "name": "TLSTEST", 00:18:18.426 "trtype": "tcp", 00:18:18.426 "traddr": "10.0.0.2", 00:18:18.426 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:18.426 "adrfam": "ipv4", 00:18:18.426 "trsvcid": "4420", 00:18:18.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.426 "psk": "/tmp/tmp.p9sszzdDoP", 00:18:18.426 "method": "bdev_nvme_attach_controller", 00:18:18.426 "req_id": 1 00:18:18.426 } 00:18:18.426 Got JSON-RPC error response 00:18:18.426 response: 00:18:18.426 { 00:18:18.426 "code": -32602, 00:18:18.426 "message": "Invalid parameters" 00:18:18.426 } 00:18:18.426 21:11:34 -- target/tls.sh@36 -- # killprocess 3074718 00:18:18.426 21:11:34 -- common/autotest_common.sh@936 -- # '[' -z 3074718 ']' 00:18:18.426 21:11:34 -- common/autotest_common.sh@940 -- # kill -0 3074718 00:18:18.426 21:11:34 -- common/autotest_common.sh@941 -- # uname 00:18:18.426 21:11:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.426 21:11:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3074718 00:18:18.426 21:11:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:18.426 21:11:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:18.426 21:11:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3074718' 00:18:18.426 killing process with pid 3074718 00:18:18.426 21:11:34 -- common/autotest_common.sh@955 -- # kill 3074718 00:18:18.426 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.426 00:18:18.426 Latency(us) 00:18:18.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.426 =================================================================================================================== 00:18:18.426 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.426 [2024-04-18 21:11:34.288778] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:18.426 21:11:34 -- common/autotest_common.sh@960 -- # wait 3074718 00:18:18.686 21:11:34 -- target/tls.sh@37 -- # return 1 00:18:18.686 21:11:34 -- common/autotest_common.sh@641 -- # es=1 00:18:18.686 21:11:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:18.686 21:11:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:18.686 21:11:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:18.686 21:11:34 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.p9sszzdDoP 00:18:18.686 21:11:34 -- common/autotest_common.sh@638 -- # local es=0 00:18:18.686 21:11:34 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.p9sszzdDoP 00:18:18.686 21:11:34 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:18.686 21:11:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:18.686 21:11:34 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:18.686 21:11:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:18.686 21:11:34 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.p9sszzdDoP 00:18:18.686 21:11:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.686 21:11:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:18.686 21:11:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.686 21:11:34 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p9sszzdDoP' 00:18:18.686 21:11:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.686 21:11:34 -- target/tls.sh@28 -- # bdevperf_pid=3074953 00:18:18.686 21:11:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.686 21:11:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.686 21:11:34 -- target/tls.sh@31 -- # waitforlisten 3074953 /var/tmp/bdevperf.sock 00:18:18.686 21:11:34 -- common/autotest_common.sh@817 -- # '[' -z 3074953 ']' 00:18:18.686 21:11:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.686 21:11:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.687 21:11:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.687 21:11:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.687 21:11:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.687 [2024-04-18 21:11:34.532453] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:18:18.687 [2024-04-18 21:11:34.532501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074953 ] 00:18:18.687 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.687 [2024-04-18 21:11:34.604392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.947 [2024-04-18 21:11:34.673706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.579 21:11:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.579 21:11:35 -- common/autotest_common.sh@850 -- # return 0 00:18:19.579 21:11:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p9sszzdDoP 00:18:19.579 [2024-04-18 21:11:35.507114] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.579 [2024-04-18 21:11:35.507186] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:19.839 [2024-04-18 21:11:35.514497] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:19.839 [2024-04-18 21:11:35.514526] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:19.840 [2024-04-18 21:11:35.514549] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:19.840 [2024-04-18 21:11:35.515548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcc570 (107): Transport endpoint is not connected 00:18:19.840 [2024-04-18 21:11:35.516554] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcc570 (9): Bad file descriptor 00:18:19.840 [2024-04-18 21:11:35.517555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:19.840 [2024-04-18 21:11:35.517566] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:19.840 [2024-04-18 21:11:35.517573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:19.840 request: 00:18:19.840 { 00:18:19.840 "name": "TLSTEST", 00:18:19.840 "trtype": "tcp", 00:18:19.840 "traddr": "10.0.0.2", 00:18:19.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.840 "adrfam": "ipv4", 00:18:19.840 "trsvcid": "4420", 00:18:19.840 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:19.840 "psk": "/tmp/tmp.p9sszzdDoP", 00:18:19.840 "method": "bdev_nvme_attach_controller", 00:18:19.840 "req_id": 1 00:18:19.840 } 00:18:19.840 Got JSON-RPC error response 00:18:19.840 response: 00:18:19.840 { 00:18:19.840 "code": -32602, 00:18:19.840 "message": "Invalid parameters" 00:18:19.840 } 00:18:19.840 21:11:35 -- target/tls.sh@36 -- # killprocess 3074953 00:18:19.840 21:11:35 -- common/autotest_common.sh@936 -- # '[' -z 3074953 ']' 00:18:19.840 21:11:35 -- common/autotest_common.sh@940 -- # kill -0 3074953 00:18:19.840 21:11:35 -- common/autotest_common.sh@941 -- # uname 00:18:19.840 21:11:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.840 21:11:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3074953 00:18:19.840 21:11:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:19.840 21:11:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:19.840 21:11:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3074953' 00:18:19.840 killing process with pid 3074953 00:18:19.840 21:11:35 -- common/autotest_common.sh@955 -- # kill 3074953 00:18:19.840 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.840 00:18:19.840 Latency(us) 00:18:19.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.840 =================================================================================================================== 00:18:19.840 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.840 [2024-04-18 21:11:35.579916] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:19.840 21:11:35 -- common/autotest_common.sh@960 -- # wait 3074953 00:18:20.100 21:11:35 -- target/tls.sh@37 -- # return 1 00:18:20.100 21:11:35 -- common/autotest_common.sh@641 -- # es=1 00:18:20.100 21:11:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:20.100 21:11:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:20.100 21:11:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:20.100 21:11:35 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.100 21:11:35 -- common/autotest_common.sh@638 -- # local es=0 00:18:20.100 21:11:35 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.100 21:11:35 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:20.100 21:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:20.100 21:11:35 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:20.100 21:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:20.100 21:11:35 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.100 21:11:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:20.100 21:11:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:20.100 21:11:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:20.100 21:11:35 -- target/tls.sh@23 -- # psk= 
00:18:20.100 21:11:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.100 21:11:35 -- target/tls.sh@28 -- # bdevperf_pid=3075191 00:18:20.100 21:11:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.100 21:11:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.100 21:11:35 -- target/tls.sh@31 -- # waitforlisten 3075191 /var/tmp/bdevperf.sock 00:18:20.100 21:11:35 -- common/autotest_common.sh@817 -- # '[' -z 3075191 ']' 00:18:20.100 21:11:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.100 21:11:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.100 21:11:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.100 21:11:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.100 21:11:35 -- common/autotest_common.sh@10 -- # set +x 00:18:20.100 [2024-04-18 21:11:35.826643] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:20.100 [2024-04-18 21:11:35.826687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075191 ] 00:18:20.100 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.100 [2024-04-18 21:11:35.881803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.100 [2024-04-18 21:11:35.949210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.039 21:11:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.039 21:11:36 -- common/autotest_common.sh@850 -- # return 0 00:18:21.039 21:11:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:21.039 [2024-04-18 21:11:36.789367] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:21.039 [2024-04-18 21:11:36.791238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57bbe0 (9): Bad file descriptor 00:18:21.039 [2024-04-18 21:11:36.792236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:21.039 [2024-04-18 21:11:36.792246] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:21.039 [2024-04-18 21:11:36.792253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:21.039 request: 00:18:21.039 { 00:18:21.039 "name": "TLSTEST", 00:18:21.039 "trtype": "tcp", 00:18:21.039 "traddr": "10.0.0.2", 00:18:21.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.039 "adrfam": "ipv4", 00:18:21.039 "trsvcid": "4420", 00:18:21.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.039 "method": "bdev_nvme_attach_controller", 00:18:21.039 "req_id": 1 00:18:21.039 } 00:18:21.039 Got JSON-RPC error response 00:18:21.039 response: 00:18:21.039 { 00:18:21.039 "code": -32602, 00:18:21.039 "message": "Invalid parameters" 00:18:21.039 } 00:18:21.039 21:11:36 -- target/tls.sh@36 -- # killprocess 3075191 00:18:21.039 21:11:36 -- common/autotest_common.sh@936 -- # '[' -z 3075191 ']' 00:18:21.039 21:11:36 -- common/autotest_common.sh@940 -- # kill -0 3075191 00:18:21.039 21:11:36 -- common/autotest_common.sh@941 -- # uname 00:18:21.039 21:11:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.039 21:11:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3075191 00:18:21.039 21:11:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:21.039 21:11:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:21.039 21:11:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3075191' 00:18:21.039 killing process with pid 3075191 00:18:21.039 21:11:36 -- common/autotest_common.sh@955 -- # kill 3075191 00:18:21.039 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.039 00:18:21.039 Latency(us) 00:18:21.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.040 =================================================================================================================== 00:18:21.040 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.040 21:11:36 -- common/autotest_common.sh@960 -- # wait 3075191 00:18:21.299 21:11:37 -- target/tls.sh@37 -- # return 1 00:18:21.299 21:11:37 -- common/autotest_common.sh@641 -- # es=1 00:18:21.299 21:11:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:21.299 21:11:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:21.299 21:11:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:21.299 21:11:37 -- target/tls.sh@158 -- # killprocess 3070232 00:18:21.299 21:11:37 -- common/autotest_common.sh@936 -- # '[' -z 3070232 ']' 00:18:21.299 21:11:37 -- common/autotest_common.sh@940 -- # kill -0 3070232 00:18:21.299 21:11:37 -- common/autotest_common.sh@941 -- # uname 00:18:21.299 21:11:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.299 21:11:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3070232 00:18:21.300 21:11:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:21.300 21:11:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:21.300 21:11:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3070232' 00:18:21.300 killing process with pid 3070232 00:18:21.300 21:11:37 -- common/autotest_common.sh@955 -- # kill 3070232 00:18:21.300 [2024-04-18 21:11:37.097151] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:21.300 21:11:37 -- common/autotest_common.sh@960 -- # wait 3070232 00:18:21.559 21:11:37 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:21.559 21:11:37 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:18:21.559 21:11:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:21.559 21:11:37 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:21.559 21:11:37 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:21.560 21:11:37 -- nvmf/common.sh@693 -- # digest=2 00:18:21.560 21:11:37 -- nvmf/common.sh@694 -- # python - 00:18:21.560 21:11:37 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.560 21:11:37 -- target/tls.sh@160 -- # mktemp 00:18:21.560 21:11:37 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.W1fES83uIr 00:18:21.560 21:11:37 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.560 21:11:37 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.W1fES83uIr 00:18:21.560 21:11:37 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:21.560 21:11:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:21.560 21:11:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:21.560 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:18:21.560 21:11:37 -- nvmf/common.sh@470 -- # nvmfpid=3075446 00:18:21.560 21:11:37 -- nvmf/common.sh@471 -- # waitforlisten 3075446 00:18:21.560 21:11:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.560 21:11:37 -- common/autotest_common.sh@817 -- # '[' -z 3075446 ']' 00:18:21.560 21:11:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.560 21:11:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.560 21:11:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.560 21:11:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.560 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:18:21.560 [2024-04-18 21:11:37.419171] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:21.560 [2024-04-18 21:11:37.419217] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.560 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.560 [2024-04-18 21:11:37.481644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.820 [2024-04-18 21:11:37.548802] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.820 [2024-04-18 21:11:37.548844] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.820 [2024-04-18 21:11:37.548851] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.820 [2024-04-18 21:11:37.548857] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.820 [2024-04-18 21:11:37.548863] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
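The format_key expansion above shows how the printable TLS PSKs are produced: an NVMeTLSkey-1:<hh>:<base64>: token, where the two-digit field is the hash selector (01 for the 32-character keys earlier, 02 here) and the base64 payload encodes the configured key. The helper body is not shown in the log beyond the "python -" invocation, so the following is a hedged reconstruction consistent with the shape and length of the printed keys; the CRC-32 trailer and its byte order are assumptions, not facts from the trace:

# Hypothetical reconstruction of the PSK interchange-format helper.
format_key() {
    local prefix=$1 key=$2 digest=$3
    # Assumption: base64 of the configured key bytes plus a little-endian
    # CRC-32 trailer; only the outer "prefix:digest:base64:" shape is
    # confirmed by the keys printed in the log.
    python3 -c "
import base64, zlib
key = b'$key'
crc = zlib.crc32(key).to_bytes(4, byteorder='little')
print('$prefix:%02x:%s:' % ($digest, base64.b64encode(key + crc).decode()), end='')
"
}

format_interchange_psk() {
    format_key NVMeTLSkey-1 "$1" "$2"
}

# Usage mirroring the run above: write the key with mode 0600 so it is accepted.
key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
key_path=$(mktemp)
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"

The relaunched target consumes the resulting file below via nvmf_subsystem_add_host --psk, and the initiator passes the same path to bdev_nvme_attach_controller --psk.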
00:18:21.820 [2024-04-18 21:11:37.548885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.389 21:11:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.389 21:11:38 -- common/autotest_common.sh@850 -- # return 0 00:18:22.389 21:11:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:22.389 21:11:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:22.389 21:11:38 -- common/autotest_common.sh@10 -- # set +x 00:18:22.389 21:11:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.389 21:11:38 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.W1fES83uIr 00:18:22.389 21:11:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.W1fES83uIr 00:18:22.389 21:11:38 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.649 [2024-04-18 21:11:38.411951] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.649 21:11:38 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.909 21:11:38 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.909 [2024-04-18 21:11:38.748805] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.909 [2024-04-18 21:11:38.748987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.909 21:11:38 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.168 malloc0 00:18:23.168 21:11:38 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.428 21:11:39 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W1fES83uIr 00:18:23.428 [2024-04-18 21:11:39.254250] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:23.428 21:11:39 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W1fES83uIr 00:18:23.428 21:11:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.428 21:11:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.428 21:11:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.428 21:11:39 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.W1fES83uIr' 00:18:23.428 21:11:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.428 21:11:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.428 21:11:39 -- target/tls.sh@28 -- # bdevperf_pid=3075700 00:18:23.428 21:11:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.428 21:11:39 -- target/tls.sh@31 -- # waitforlisten 3075700 /var/tmp/bdevperf.sock 00:18:23.428 21:11:39 -- common/autotest_common.sh@817 -- # '[' -z 3075700 ']' 00:18:23.428 21:11:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.428 21:11:39 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.428 21:11:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.428 21:11:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.428 21:11:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.428 [2024-04-18 21:11:39.316978] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:23.428 [2024-04-18 21:11:39.317021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075700 ] 00:18:23.428 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.688 [2024-04-18 21:11:39.372552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.688 [2024-04-18 21:11:39.448317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.257 21:11:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.257 21:11:40 -- common/autotest_common.sh@850 -- # return 0 00:18:24.257 21:11:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W1fES83uIr 00:18:24.517 [2024-04-18 21:11:40.278855] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.517 [2024-04-18 21:11:40.278934] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:24.517 TLSTESTn1 00:18:24.517 21:11:40 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:24.777 Running I/O for 10 seconds... 
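Each positive case in this run follows the same two-step shape just traced: the target side publishes a TLS-enabled listener and registers the host's PSK, then the initiator side (bdevperf on its own RPC socket) attaches a controller with the matching key and drives I/O against the resulting bdev. Condensed into one hedged sketch, using the RPC calls and parameters visible above, with paths shortened to $SPDK for readability:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used by this run
RPC="$SPDK/scripts/rpc.py"
KEY=/tmp/tmp.W1fES83uIr                                  # the 0600 PSK file written earlier

# Target side (nvmf_tgt was launched earlier inside the cvl_0_0_ns_spdk namespace;
# its default RPC socket /var/tmp/spdk.sock is still reachable from here).
"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$RPC" bdev_malloc_create 32 4096 -b malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# Initiator side: bdevperf on its own RPC socket, a TLS-backed controller, then I/O.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
sleep 1   # the harness waits on the socket with waitforlisten; a plain sleep stands in here
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests

The latency table in the next trace line is the output of that perform_tests run; the negative cases earlier in the log differ only in which key, hostnqn, or subnqn is swapped out.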
00:18:34.765 00:18:34.765 Latency(us) 00:18:34.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.765 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:34.765 Verification LBA range: start 0x0 length 0x2000 00:18:34.765 TLSTESTn1 : 10.05 2496.86 9.75 0.00 0.00 51148.58 7465.41 81606.57 00:18:34.765 =================================================================================================================== 00:18:34.765 Total : 2496.86 9.75 0.00 0.00 51148.58 7465.41 81606.57 00:18:34.765 0 00:18:34.765 21:11:50 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:34.765 21:11:50 -- target/tls.sh@45 -- # killprocess 3075700 00:18:34.765 21:11:50 -- common/autotest_common.sh@936 -- # '[' -z 3075700 ']' 00:18:34.765 21:11:50 -- common/autotest_common.sh@940 -- # kill -0 3075700 00:18:34.765 21:11:50 -- common/autotest_common.sh@941 -- # uname 00:18:34.765 21:11:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:34.765 21:11:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3075700 00:18:34.765 21:11:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:34.765 21:11:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:34.765 21:11:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3075700' 00:18:34.765 killing process with pid 3075700 00:18:34.765 21:11:50 -- common/autotest_common.sh@955 -- # kill 3075700 00:18:34.765 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.765 00:18:34.765 Latency(us) 00:18:34.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.765 =================================================================================================================== 00:18:34.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.765 [2024-04-18 21:11:50.586155] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:34.765 21:11:50 -- common/autotest_common.sh@960 -- # wait 3075700 00:18:35.025 21:11:50 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.W1fES83uIr 00:18:35.025 21:11:50 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W1fES83uIr 00:18:35.025 21:11:50 -- common/autotest_common.sh@638 -- # local es=0 00:18:35.025 21:11:50 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W1fES83uIr 00:18:35.025 21:11:50 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:35.025 21:11:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:35.025 21:11:50 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:35.025 21:11:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:35.025 21:11:50 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W1fES83uIr 00:18:35.025 21:11:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.025 21:11:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.025 21:11:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:35.025 21:11:50 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.W1fES83uIr' 00:18:35.025 21:11:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.025 21:11:50 -- target/tls.sh@28 -- # 
bdevperf_pid=3077626 00:18:35.025 21:11:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.025 21:11:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.025 21:11:50 -- target/tls.sh@31 -- # waitforlisten 3077626 /var/tmp/bdevperf.sock 00:18:35.025 21:11:50 -- common/autotest_common.sh@817 -- # '[' -z 3077626 ']' 00:18:35.025 21:11:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.025 21:11:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:35.025 21:11:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.025 21:11:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:35.025 21:11:50 -- common/autotest_common.sh@10 -- # set +x 00:18:35.025 [2024-04-18 21:11:50.845890] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:35.025 [2024-04-18 21:11:50.845938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077626 ] 00:18:35.025 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.025 [2024-04-18 21:11:50.903282] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.285 [2024-04-18 21:11:50.974554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.853 21:11:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:35.853 21:11:51 -- common/autotest_common.sh@850 -- # return 0 00:18:35.853 21:11:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W1fES83uIr 00:18:36.113 [2024-04-18 21:11:51.796329] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.113 [2024-04-18 21:11:51.796380] bdev_nvme.c:6068:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:36.113 [2024-04-18 21:11:51.796387] bdev_nvme.c:6177:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.W1fES83uIr 00:18:36.113 request: 00:18:36.113 { 00:18:36.113 "name": "TLSTEST", 00:18:36.113 "trtype": "tcp", 00:18:36.113 "traddr": "10.0.0.2", 00:18:36.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.113 "adrfam": "ipv4", 00:18:36.113 "trsvcid": "4420", 00:18:36.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.113 "psk": "/tmp/tmp.W1fES83uIr", 00:18:36.113 "method": "bdev_nvme_attach_controller", 00:18:36.113 "req_id": 1 00:18:36.113 } 00:18:36.113 Got JSON-RPC error response 00:18:36.113 response: 00:18:36.113 { 00:18:36.113 "code": -1, 00:18:36.113 "message": "Operation not permitted" 00:18:36.113 } 00:18:36.113 21:11:51 -- target/tls.sh@36 -- # killprocess 3077626 00:18:36.113 21:11:51 -- common/autotest_common.sh@936 -- # '[' -z 3077626 ']' 00:18:36.113 21:11:51 -- common/autotest_common.sh@940 -- # kill -0 3077626 00:18:36.113 21:11:51 -- common/autotest_common.sh@941 -- # uname 00:18:36.113 21:11:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.113 
21:11:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3077626 00:18:36.113 21:11:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:36.113 21:11:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:36.113 21:11:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3077626' 00:18:36.113 killing process with pid 3077626 00:18:36.113 21:11:51 -- common/autotest_common.sh@955 -- # kill 3077626 00:18:36.113 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.113 00:18:36.113 Latency(us) 00:18:36.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.113 =================================================================================================================== 00:18:36.113 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.113 21:11:51 -- common/autotest_common.sh@960 -- # wait 3077626 00:18:36.373 21:11:52 -- target/tls.sh@37 -- # return 1 00:18:36.373 21:11:52 -- common/autotest_common.sh@641 -- # es=1 00:18:36.373 21:11:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:36.373 21:11:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:36.373 21:11:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:36.373 21:11:52 -- target/tls.sh@174 -- # killprocess 3075446 00:18:36.373 21:11:52 -- common/autotest_common.sh@936 -- # '[' -z 3075446 ']' 00:18:36.373 21:11:52 -- common/autotest_common.sh@940 -- # kill -0 3075446 00:18:36.373 21:11:52 -- common/autotest_common.sh@941 -- # uname 00:18:36.373 21:11:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.373 21:11:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3075446 00:18:36.373 21:11:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:36.373 21:11:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:36.373 21:11:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3075446' 00:18:36.373 killing process with pid 3075446 00:18:36.373 21:11:52 -- common/autotest_common.sh@955 -- # kill 3075446 00:18:36.373 [2024-04-18 21:11:52.096876] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:36.373 21:11:52 -- common/autotest_common.sh@960 -- # wait 3075446 00:18:36.633 21:11:52 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:36.633 21:11:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:36.633 21:11:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:36.633 21:11:52 -- common/autotest_common.sh@10 -- # set +x 00:18:36.633 21:11:52 -- nvmf/common.sh@470 -- # nvmfpid=3077924 00:18:36.633 21:11:52 -- nvmf/common.sh@471 -- # waitforlisten 3077924 00:18:36.633 21:11:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.633 21:11:52 -- common/autotest_common.sh@817 -- # '[' -z 3077924 ']' 00:18:36.633 21:11:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.633 21:11:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:36.633 21:11:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
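The failed attach just above is the intended outcome: tls.sh@170 loosened the PSK file to mode 0666 before this run, bdev_nvme refuses to load a key readable by group or other ("Incorrect permissions for PSK file"), and the resulting "Operation not permitted" JSON-RPC error is what the NOT wrapper turns into a passing negative test. The check being exercised is roughly:

  PSK=/tmp/tmp.W1fES83uIr

  chmod 0666 "$PSK"   # world-readable key -> bdev_nvme_attach_controller --psk must fail
  chmod 0600 "$PSK"   # owner-only key (done later at tls.sh@181) -> attach succeeds again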
00:18:36.633 21:11:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:36.633 21:11:52 -- common/autotest_common.sh@10 -- # set +x 00:18:36.633 [2024-04-18 21:11:52.369538] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:36.633 [2024-04-18 21:11:52.369581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.633 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.633 [2024-04-18 21:11:52.432858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.633 [2024-04-18 21:11:52.505780] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.633 [2024-04-18 21:11:52.505815] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.633 [2024-04-18 21:11:52.505822] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.633 [2024-04-18 21:11:52.505828] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.633 [2024-04-18 21:11:52.505834] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.633 [2024-04-18 21:11:52.505855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.571 21:11:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.571 21:11:53 -- common/autotest_common.sh@850 -- # return 0 00:18:37.571 21:11:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:37.571 21:11:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.571 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:18:37.571 21:11:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.571 21:11:53 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.W1fES83uIr 00:18:37.571 21:11:53 -- common/autotest_common.sh@638 -- # local es=0 00:18:37.571 21:11:53 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.W1fES83uIr 00:18:37.571 21:11:53 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:37.571 21:11:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:37.571 21:11:53 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:37.571 21:11:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:37.571 21:11:53 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.W1fES83uIr 00:18:37.571 21:11:53 -- target/tls.sh@49 -- # local key=/tmp/tmp.W1fES83uIr 00:18:37.571 21:11:53 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.571 [2024-04-18 21:11:53.353358] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.571 21:11:53 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:37.830 21:11:53 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:37.830 [2024-04-18 21:11:53.706276] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.830 [2024-04-18 21:11:53.706465] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.830 21:11:53 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.089 malloc0 00:18:38.089 21:11:53 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:38.347 21:11:54 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W1fES83uIr 00:18:38.347 [2024-04-18 21:11:54.223779] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:38.348 [2024-04-18 21:11:54.223808] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:38.348 [2024-04-18 21:11:54.223828] subsystem.c:1011:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:38.348 request: 00:18:38.348 { 00:18:38.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.348 "host": "nqn.2016-06.io.spdk:host1", 00:18:38.348 "psk": "/tmp/tmp.W1fES83uIr", 00:18:38.348 "method": "nvmf_subsystem_add_host", 00:18:38.348 "req_id": 1 00:18:38.348 } 00:18:38.348 Got JSON-RPC error response 00:18:38.348 response: 00:18:38.348 { 00:18:38.348 "code": -32603, 00:18:38.348 "message": "Internal error" 00:18:38.348 } 00:18:38.348 21:11:54 -- common/autotest_common.sh@641 -- # es=1 00:18:38.348 21:11:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:38.348 21:11:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:38.348 21:11:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:38.348 21:11:54 -- target/tls.sh@180 -- # killprocess 3077924 00:18:38.348 21:11:54 -- common/autotest_common.sh@936 -- # '[' -z 3077924 ']' 00:18:38.348 21:11:54 -- common/autotest_common.sh@940 -- # kill -0 3077924 00:18:38.348 21:11:54 -- common/autotest_common.sh@941 -- # uname 00:18:38.348 21:11:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:38.348 21:11:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3077924 00:18:38.606 21:11:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:38.606 21:11:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:38.606 21:11:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3077924' 00:18:38.606 killing process with pid 3077924 00:18:38.606 21:11:54 -- common/autotest_common.sh@955 -- # kill 3077924 00:18:38.606 21:11:54 -- common/autotest_common.sh@960 -- # wait 3077924 00:18:38.606 21:11:54 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.W1fES83uIr 00:18:38.606 21:11:54 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:38.606 21:11:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:38.606 21:11:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:38.606 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:18:38.606 21:11:54 -- nvmf/common.sh@470 -- # nvmfpid=3078272 00:18:38.606 21:11:54 -- nvmf/common.sh@471 -- # waitforlisten 3078272 00:18:38.606 21:11:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:38.606 21:11:54 -- common/autotest_common.sh@817 -- # '[' -z 3078272 ']' 00:18:38.607 21:11:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.607 21:11:54 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:38.607 21:11:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.607 21:11:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:38.607 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:18:38.865 [2024-04-18 21:11:54.563461] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:38.866 [2024-04-18 21:11:54.563506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.866 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.866 [2024-04-18 21:11:54.626525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.866 [2024-04-18 21:11:54.700398] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.866 [2024-04-18 21:11:54.700436] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.866 [2024-04-18 21:11:54.700443] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.866 [2024-04-18 21:11:54.700450] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.866 [2024-04-18 21:11:54.700455] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.866 [2024-04-18 21:11:54.700471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.433 21:11:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:39.433 21:11:55 -- common/autotest_common.sh@850 -- # return 0 00:18:39.433 21:11:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:39.433 21:11:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:39.433 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:18:39.692 21:11:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.692 21:11:55 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.W1fES83uIr 00:18:39.692 21:11:55 -- target/tls.sh@49 -- # local key=/tmp/tmp.W1fES83uIr 00:18:39.692 21:11:55 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:39.692 [2024-04-18 21:11:55.540324] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.692 21:11:55 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:39.951 21:11:55 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:40.210 [2024-04-18 21:11:55.885208] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.210 [2024-04-18 21:11:55.885395] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.210 21:11:55 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:40.210 malloc0 00:18:40.210 21:11:56 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:40.503 21:11:56 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W1fES83uIr 00:18:40.504 [2024-04-18 21:11:56.398659] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:40.504 21:11:56 -- target/tls.sh@188 -- # bdevperf_pid=3078607 00:18:40.504 21:11:56 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.504 21:11:56 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.504 21:11:56 -- target/tls.sh@191 -- # waitforlisten 3078607 /var/tmp/bdevperf.sock 00:18:40.504 21:11:56 -- common/autotest_common.sh@817 -- # '[' -z 3078607 ']' 00:18:40.504 21:11:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.504 21:11:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:40.504 21:11:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.504 21:11:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:40.504 21:11:56 -- common/autotest_common.sh@10 -- # set +x 00:18:40.784 [2024-04-18 21:11:56.464039] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:40.784 [2024-04-18 21:11:56.464088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3078607 ] 00:18:40.784 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.784 [2024-04-18 21:11:56.522592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.784 [2024-04-18 21:11:56.595043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.353 21:11:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:41.353 21:11:57 -- common/autotest_common.sh@850 -- # return 0 00:18:41.353 21:11:57 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W1fES83uIr 00:18:41.613 [2024-04-18 21:11:57.412844] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.613 [2024-04-18 21:11:57.412920] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:41.613 TLSTESTn1 00:18:41.613 21:11:57 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:41.873 21:11:57 -- target/tls.sh@196 -- # tgtconf='{ 00:18:41.873 "subsystems": [ 00:18:41.873 { 00:18:41.873 "subsystem": "keyring", 00:18:41.873 "config": [] 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "subsystem": "iobuf", 00:18:41.873 "config": [ 00:18:41.873 { 00:18:41.873 "method": "iobuf_set_options", 00:18:41.873 "params": { 00:18:41.873 
"small_pool_count": 8192, 00:18:41.873 "large_pool_count": 1024, 00:18:41.873 "small_bufsize": 8192, 00:18:41.873 "large_bufsize": 135168 00:18:41.873 } 00:18:41.873 } 00:18:41.873 ] 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "subsystem": "sock", 00:18:41.873 "config": [ 00:18:41.873 { 00:18:41.873 "method": "sock_impl_set_options", 00:18:41.873 "params": { 00:18:41.873 "impl_name": "posix", 00:18:41.873 "recv_buf_size": 2097152, 00:18:41.873 "send_buf_size": 2097152, 00:18:41.873 "enable_recv_pipe": true, 00:18:41.873 "enable_quickack": false, 00:18:41.873 "enable_placement_id": 0, 00:18:41.873 "enable_zerocopy_send_server": true, 00:18:41.873 "enable_zerocopy_send_client": false, 00:18:41.873 "zerocopy_threshold": 0, 00:18:41.873 "tls_version": 0, 00:18:41.873 "enable_ktls": false 00:18:41.873 } 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "method": "sock_impl_set_options", 00:18:41.873 "params": { 00:18:41.873 "impl_name": "ssl", 00:18:41.873 "recv_buf_size": 4096, 00:18:41.873 "send_buf_size": 4096, 00:18:41.873 "enable_recv_pipe": true, 00:18:41.873 "enable_quickack": false, 00:18:41.873 "enable_placement_id": 0, 00:18:41.873 "enable_zerocopy_send_server": true, 00:18:41.873 "enable_zerocopy_send_client": false, 00:18:41.873 "zerocopy_threshold": 0, 00:18:41.873 "tls_version": 0, 00:18:41.873 "enable_ktls": false 00:18:41.873 } 00:18:41.873 } 00:18:41.873 ] 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "subsystem": "vmd", 00:18:41.873 "config": [] 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "subsystem": "accel", 00:18:41.873 "config": [ 00:18:41.873 { 00:18:41.873 "method": "accel_set_options", 00:18:41.873 "params": { 00:18:41.873 "small_cache_size": 128, 00:18:41.873 "large_cache_size": 16, 00:18:41.873 "task_count": 2048, 00:18:41.873 "sequence_count": 2048, 00:18:41.873 "buf_count": 2048 00:18:41.873 } 00:18:41.873 } 00:18:41.873 ] 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "subsystem": "bdev", 00:18:41.873 "config": [ 00:18:41.873 { 00:18:41.873 "method": "bdev_set_options", 00:18:41.873 "params": { 00:18:41.873 "bdev_io_pool_size": 65535, 00:18:41.873 "bdev_io_cache_size": 256, 00:18:41.873 "bdev_auto_examine": true, 00:18:41.873 "iobuf_small_cache_size": 128, 00:18:41.873 "iobuf_large_cache_size": 16 00:18:41.873 } 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "method": "bdev_raid_set_options", 00:18:41.873 "params": { 00:18:41.873 "process_window_size_kb": 1024 00:18:41.873 } 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "method": "bdev_iscsi_set_options", 00:18:41.873 "params": { 00:18:41.873 "timeout_sec": 30 00:18:41.873 } 00:18:41.873 }, 00:18:41.873 { 00:18:41.873 "method": "bdev_nvme_set_options", 00:18:41.873 "params": { 00:18:41.873 "action_on_timeout": "none", 00:18:41.873 "timeout_us": 0, 00:18:41.873 "timeout_admin_us": 0, 00:18:41.873 "keep_alive_timeout_ms": 10000, 00:18:41.873 "arbitration_burst": 0, 00:18:41.873 "low_priority_weight": 0, 00:18:41.873 "medium_priority_weight": 0, 00:18:41.873 "high_priority_weight": 0, 00:18:41.873 "nvme_adminq_poll_period_us": 10000, 00:18:41.873 "nvme_ioq_poll_period_us": 0, 00:18:41.873 "io_queue_requests": 0, 00:18:41.873 "delay_cmd_submit": true, 00:18:41.873 "transport_retry_count": 4, 00:18:41.873 "bdev_retry_count": 3, 00:18:41.873 "transport_ack_timeout": 0, 00:18:41.873 "ctrlr_loss_timeout_sec": 0, 00:18:41.873 "reconnect_delay_sec": 0, 00:18:41.874 "fast_io_fail_timeout_sec": 0, 00:18:41.874 "disable_auto_failback": false, 00:18:41.874 "generate_uuids": false, 00:18:41.874 "transport_tos": 0, 00:18:41.874 "nvme_error_stat": 
false, 00:18:41.874 "rdma_srq_size": 0, 00:18:41.874 "io_path_stat": false, 00:18:41.874 "allow_accel_sequence": false, 00:18:41.874 "rdma_max_cq_size": 0, 00:18:41.874 "rdma_cm_event_timeout_ms": 0, 00:18:41.874 "dhchap_digests": [ 00:18:41.874 "sha256", 00:18:41.874 "sha384", 00:18:41.874 "sha512" 00:18:41.874 ], 00:18:41.874 "dhchap_dhgroups": [ 00:18:41.874 "null", 00:18:41.874 "ffdhe2048", 00:18:41.874 "ffdhe3072", 00:18:41.874 "ffdhe4096", 00:18:41.874 "ffdhe6144", 00:18:41.874 "ffdhe8192" 00:18:41.874 ] 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "bdev_nvme_set_hotplug", 00:18:41.874 "params": { 00:18:41.874 "period_us": 100000, 00:18:41.874 "enable": false 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "bdev_malloc_create", 00:18:41.874 "params": { 00:18:41.874 "name": "malloc0", 00:18:41.874 "num_blocks": 8192, 00:18:41.874 "block_size": 4096, 00:18:41.874 "physical_block_size": 4096, 00:18:41.874 "uuid": "84902ddd-0037-4095-bdf2-b7a46fcb280e", 00:18:41.874 "optimal_io_boundary": 0 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "bdev_wait_for_examine" 00:18:41.874 } 00:18:41.874 ] 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "subsystem": "nbd", 00:18:41.874 "config": [] 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "subsystem": "scheduler", 00:18:41.874 "config": [ 00:18:41.874 { 00:18:41.874 "method": "framework_set_scheduler", 00:18:41.874 "params": { 00:18:41.874 "name": "static" 00:18:41.874 } 00:18:41.874 } 00:18:41.874 ] 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "subsystem": "nvmf", 00:18:41.874 "config": [ 00:18:41.874 { 00:18:41.874 "method": "nvmf_set_config", 00:18:41.874 "params": { 00:18:41.874 "discovery_filter": "match_any", 00:18:41.874 "admin_cmd_passthru": { 00:18:41.874 "identify_ctrlr": false 00:18:41.874 } 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "nvmf_set_max_subsystems", 00:18:41.874 "params": { 00:18:41.874 "max_subsystems": 1024 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "nvmf_set_crdt", 00:18:41.874 "params": { 00:18:41.874 "crdt1": 0, 00:18:41.874 "crdt2": 0, 00:18:41.874 "crdt3": 0 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "nvmf_create_transport", 00:18:41.874 "params": { 00:18:41.874 "trtype": "TCP", 00:18:41.874 "max_queue_depth": 128, 00:18:41.874 "max_io_qpairs_per_ctrlr": 127, 00:18:41.874 "in_capsule_data_size": 4096, 00:18:41.874 "max_io_size": 131072, 00:18:41.874 "io_unit_size": 131072, 00:18:41.874 "max_aq_depth": 128, 00:18:41.874 "num_shared_buffers": 511, 00:18:41.874 "buf_cache_size": 4294967295, 00:18:41.874 "dif_insert_or_strip": false, 00:18:41.874 "zcopy": false, 00:18:41.874 "c2h_success": false, 00:18:41.874 "sock_priority": 0, 00:18:41.874 "abort_timeout_sec": 1, 00:18:41.874 "ack_timeout": 0 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "nvmf_create_subsystem", 00:18:41.874 "params": { 00:18:41.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.874 "allow_any_host": false, 00:18:41.874 "serial_number": "SPDK00000000000001", 00:18:41.874 "model_number": "SPDK bdev Controller", 00:18:41.874 "max_namespaces": 10, 00:18:41.874 "min_cntlid": 1, 00:18:41.874 "max_cntlid": 65519, 00:18:41.874 "ana_reporting": false 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "nvmf_subsystem_add_host", 00:18:41.874 "params": { 00:18:41.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.874 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.874 "psk": 
"/tmp/tmp.W1fES83uIr" 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "nvmf_subsystem_add_ns", 00:18:41.874 "params": { 00:18:41.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.874 "namespace": { 00:18:41.874 "nsid": 1, 00:18:41.874 "bdev_name": "malloc0", 00:18:41.874 "nguid": "84902DDD00374095BDF2B7A46FCB280E", 00:18:41.874 "uuid": "84902ddd-0037-4095-bdf2-b7a46fcb280e", 00:18:41.874 "no_auto_visible": false 00:18:41.874 } 00:18:41.874 } 00:18:41.874 }, 00:18:41.874 { 00:18:41.874 "method": "nvmf_subsystem_add_listener", 00:18:41.874 "params": { 00:18:41.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.874 "listen_address": { 00:18:41.874 "trtype": "TCP", 00:18:41.874 "adrfam": "IPv4", 00:18:41.874 "traddr": "10.0.0.2", 00:18:41.874 "trsvcid": "4420" 00:18:41.874 }, 00:18:41.874 "secure_channel": true 00:18:41.874 } 00:18:41.874 } 00:18:41.874 ] 00:18:41.874 } 00:18:41.874 ] 00:18:41.874 }' 00:18:41.874 21:11:57 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:42.134 21:11:57 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:42.134 "subsystems": [ 00:18:42.134 { 00:18:42.134 "subsystem": "keyring", 00:18:42.134 "config": [] 00:18:42.134 }, 00:18:42.134 { 00:18:42.134 "subsystem": "iobuf", 00:18:42.134 "config": [ 00:18:42.134 { 00:18:42.134 "method": "iobuf_set_options", 00:18:42.134 "params": { 00:18:42.134 "small_pool_count": 8192, 00:18:42.134 "large_pool_count": 1024, 00:18:42.134 "small_bufsize": 8192, 00:18:42.134 "large_bufsize": 135168 00:18:42.134 } 00:18:42.134 } 00:18:42.134 ] 00:18:42.134 }, 00:18:42.134 { 00:18:42.134 "subsystem": "sock", 00:18:42.134 "config": [ 00:18:42.134 { 00:18:42.134 "method": "sock_impl_set_options", 00:18:42.134 "params": { 00:18:42.134 "impl_name": "posix", 00:18:42.134 "recv_buf_size": 2097152, 00:18:42.134 "send_buf_size": 2097152, 00:18:42.134 "enable_recv_pipe": true, 00:18:42.134 "enable_quickack": false, 00:18:42.134 "enable_placement_id": 0, 00:18:42.134 "enable_zerocopy_send_server": true, 00:18:42.134 "enable_zerocopy_send_client": false, 00:18:42.134 "zerocopy_threshold": 0, 00:18:42.134 "tls_version": 0, 00:18:42.134 "enable_ktls": false 00:18:42.134 } 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "method": "sock_impl_set_options", 00:18:42.135 "params": { 00:18:42.135 "impl_name": "ssl", 00:18:42.135 "recv_buf_size": 4096, 00:18:42.135 "send_buf_size": 4096, 00:18:42.135 "enable_recv_pipe": true, 00:18:42.135 "enable_quickack": false, 00:18:42.135 "enable_placement_id": 0, 00:18:42.135 "enable_zerocopy_send_server": true, 00:18:42.135 "enable_zerocopy_send_client": false, 00:18:42.135 "zerocopy_threshold": 0, 00:18:42.135 "tls_version": 0, 00:18:42.135 "enable_ktls": false 00:18:42.135 } 00:18:42.135 } 00:18:42.135 ] 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "subsystem": "vmd", 00:18:42.135 "config": [] 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "subsystem": "accel", 00:18:42.135 "config": [ 00:18:42.135 { 00:18:42.135 "method": "accel_set_options", 00:18:42.135 "params": { 00:18:42.135 "small_cache_size": 128, 00:18:42.135 "large_cache_size": 16, 00:18:42.135 "task_count": 2048, 00:18:42.135 "sequence_count": 2048, 00:18:42.135 "buf_count": 2048 00:18:42.135 } 00:18:42.135 } 00:18:42.135 ] 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "subsystem": "bdev", 00:18:42.135 "config": [ 00:18:42.135 { 00:18:42.135 "method": "bdev_set_options", 00:18:42.135 "params": { 00:18:42.135 "bdev_io_pool_size": 65535, 00:18:42.135 
"bdev_io_cache_size": 256, 00:18:42.135 "bdev_auto_examine": true, 00:18:42.135 "iobuf_small_cache_size": 128, 00:18:42.135 "iobuf_large_cache_size": 16 00:18:42.135 } 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "method": "bdev_raid_set_options", 00:18:42.135 "params": { 00:18:42.135 "process_window_size_kb": 1024 00:18:42.135 } 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "method": "bdev_iscsi_set_options", 00:18:42.135 "params": { 00:18:42.135 "timeout_sec": 30 00:18:42.135 } 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "method": "bdev_nvme_set_options", 00:18:42.135 "params": { 00:18:42.135 "action_on_timeout": "none", 00:18:42.135 "timeout_us": 0, 00:18:42.135 "timeout_admin_us": 0, 00:18:42.135 "keep_alive_timeout_ms": 10000, 00:18:42.135 "arbitration_burst": 0, 00:18:42.135 "low_priority_weight": 0, 00:18:42.135 "medium_priority_weight": 0, 00:18:42.135 "high_priority_weight": 0, 00:18:42.135 "nvme_adminq_poll_period_us": 10000, 00:18:42.135 "nvme_ioq_poll_period_us": 0, 00:18:42.135 "io_queue_requests": 512, 00:18:42.135 "delay_cmd_submit": true, 00:18:42.135 "transport_retry_count": 4, 00:18:42.135 "bdev_retry_count": 3, 00:18:42.135 "transport_ack_timeout": 0, 00:18:42.135 "ctrlr_loss_timeout_sec": 0, 00:18:42.135 "reconnect_delay_sec": 0, 00:18:42.135 "fast_io_fail_timeout_sec": 0, 00:18:42.135 "disable_auto_failback": false, 00:18:42.135 "generate_uuids": false, 00:18:42.135 "transport_tos": 0, 00:18:42.135 "nvme_error_stat": false, 00:18:42.135 "rdma_srq_size": 0, 00:18:42.135 "io_path_stat": false, 00:18:42.135 "allow_accel_sequence": false, 00:18:42.135 "rdma_max_cq_size": 0, 00:18:42.135 "rdma_cm_event_timeout_ms": 0, 00:18:42.135 "dhchap_digests": [ 00:18:42.135 "sha256", 00:18:42.135 "sha384", 00:18:42.135 "sha512" 00:18:42.135 ], 00:18:42.135 "dhchap_dhgroups": [ 00:18:42.135 "null", 00:18:42.135 "ffdhe2048", 00:18:42.135 "ffdhe3072", 00:18:42.135 "ffdhe4096", 00:18:42.135 "ffdhe6144", 00:18:42.135 "ffdhe8192" 00:18:42.135 ] 00:18:42.135 } 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "method": "bdev_nvme_attach_controller", 00:18:42.135 "params": { 00:18:42.135 "name": "TLSTEST", 00:18:42.135 "trtype": "TCP", 00:18:42.135 "adrfam": "IPv4", 00:18:42.135 "traddr": "10.0.0.2", 00:18:42.135 "trsvcid": "4420", 00:18:42.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.135 "prchk_reftag": false, 00:18:42.135 "prchk_guard": false, 00:18:42.135 "ctrlr_loss_timeout_sec": 0, 00:18:42.135 "reconnect_delay_sec": 0, 00:18:42.135 "fast_io_fail_timeout_sec": 0, 00:18:42.135 "psk": "/tmp/tmp.W1fES83uIr", 00:18:42.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.135 "hdgst": false, 00:18:42.135 "ddgst": false 00:18:42.135 } 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "method": "bdev_nvme_set_hotplug", 00:18:42.135 "params": { 00:18:42.135 "period_us": 100000, 00:18:42.135 "enable": false 00:18:42.135 } 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "method": "bdev_wait_for_examine" 00:18:42.135 } 00:18:42.135 ] 00:18:42.135 }, 00:18:42.135 { 00:18:42.135 "subsystem": "nbd", 00:18:42.135 "config": [] 00:18:42.135 } 00:18:42.135 ] 00:18:42.135 }' 00:18:42.135 21:11:57 -- target/tls.sh@199 -- # killprocess 3078607 00:18:42.135 21:11:57 -- common/autotest_common.sh@936 -- # '[' -z 3078607 ']' 00:18:42.135 21:11:57 -- common/autotest_common.sh@940 -- # kill -0 3078607 00:18:42.135 21:11:57 -- common/autotest_common.sh@941 -- # uname 00:18:42.135 21:11:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:42.135 21:11:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 3078607 00:18:42.135 21:11:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:42.135 21:11:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:42.135 21:11:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3078607' 00:18:42.135 killing process with pid 3078607 00:18:42.135 21:11:58 -- common/autotest_common.sh@955 -- # kill 3078607 00:18:42.135 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.135 00:18:42.135 Latency(us) 00:18:42.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.135 =================================================================================================================== 00:18:42.135 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:42.135 [2024-04-18 21:11:58.044466] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:42.135 21:11:58 -- common/autotest_common.sh@960 -- # wait 3078607 00:18:42.395 21:11:58 -- target/tls.sh@200 -- # killprocess 3078272 00:18:42.395 21:11:58 -- common/autotest_common.sh@936 -- # '[' -z 3078272 ']' 00:18:42.395 21:11:58 -- common/autotest_common.sh@940 -- # kill -0 3078272 00:18:42.395 21:11:58 -- common/autotest_common.sh@941 -- # uname 00:18:42.395 21:11:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:42.395 21:11:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3078272 00:18:42.395 21:11:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:42.395 21:11:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:42.395 21:11:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3078272' 00:18:42.395 killing process with pid 3078272 00:18:42.395 21:11:58 -- common/autotest_common.sh@955 -- # kill 3078272 00:18:42.395 [2024-04-18 21:11:58.294042] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:42.395 21:11:58 -- common/autotest_common.sh@960 -- # wait 3078272 00:18:42.655 21:11:58 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:42.655 21:11:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:42.655 21:11:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:42.655 21:11:58 -- target/tls.sh@203 -- # echo '{ 00:18:42.655 "subsystems": [ 00:18:42.655 { 00:18:42.655 "subsystem": "keyring", 00:18:42.655 "config": [] 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "subsystem": "iobuf", 00:18:42.655 "config": [ 00:18:42.655 { 00:18:42.655 "method": "iobuf_set_options", 00:18:42.655 "params": { 00:18:42.655 "small_pool_count": 8192, 00:18:42.655 "large_pool_count": 1024, 00:18:42.655 "small_bufsize": 8192, 00:18:42.655 "large_bufsize": 135168 00:18:42.655 } 00:18:42.655 } 00:18:42.655 ] 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "subsystem": "sock", 00:18:42.655 "config": [ 00:18:42.655 { 00:18:42.655 "method": "sock_impl_set_options", 00:18:42.655 "params": { 00:18:42.655 "impl_name": "posix", 00:18:42.655 "recv_buf_size": 2097152, 00:18:42.655 "send_buf_size": 2097152, 00:18:42.655 "enable_recv_pipe": true, 00:18:42.655 "enable_quickack": false, 00:18:42.655 "enable_placement_id": 0, 00:18:42.655 "enable_zerocopy_send_server": true, 00:18:42.655 "enable_zerocopy_send_client": false, 00:18:42.655 "zerocopy_threshold": 0, 00:18:42.655 "tls_version": 0, 00:18:42.655 "enable_ktls": false 00:18:42.655 } 
00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "method": "sock_impl_set_options", 00:18:42.655 "params": { 00:18:42.655 "impl_name": "ssl", 00:18:42.655 "recv_buf_size": 4096, 00:18:42.655 "send_buf_size": 4096, 00:18:42.655 "enable_recv_pipe": true, 00:18:42.655 "enable_quickack": false, 00:18:42.655 "enable_placement_id": 0, 00:18:42.655 "enable_zerocopy_send_server": true, 00:18:42.655 "enable_zerocopy_send_client": false, 00:18:42.655 "zerocopy_threshold": 0, 00:18:42.655 "tls_version": 0, 00:18:42.655 "enable_ktls": false 00:18:42.655 } 00:18:42.655 } 00:18:42.655 ] 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "subsystem": "vmd", 00:18:42.655 "config": [] 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "subsystem": "accel", 00:18:42.655 "config": [ 00:18:42.655 { 00:18:42.655 "method": "accel_set_options", 00:18:42.655 "params": { 00:18:42.655 "small_cache_size": 128, 00:18:42.655 "large_cache_size": 16, 00:18:42.655 "task_count": 2048, 00:18:42.655 "sequence_count": 2048, 00:18:42.655 "buf_count": 2048 00:18:42.655 } 00:18:42.655 } 00:18:42.655 ] 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "subsystem": "bdev", 00:18:42.655 "config": [ 00:18:42.655 { 00:18:42.655 "method": "bdev_set_options", 00:18:42.655 "params": { 00:18:42.655 "bdev_io_pool_size": 65535, 00:18:42.655 "bdev_io_cache_size": 256, 00:18:42.655 "bdev_auto_examine": true, 00:18:42.655 "iobuf_small_cache_size": 128, 00:18:42.655 "iobuf_large_cache_size": 16 00:18:42.655 } 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "method": "bdev_raid_set_options", 00:18:42.655 "params": { 00:18:42.655 "process_window_size_kb": 1024 00:18:42.655 } 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "method": "bdev_iscsi_set_options", 00:18:42.655 "params": { 00:18:42.655 "timeout_sec": 30 00:18:42.655 } 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "method": "bdev_nvme_set_options", 00:18:42.655 "params": { 00:18:42.655 "action_on_timeout": "none", 00:18:42.655 "timeout_us": 0, 00:18:42.655 "timeout_admin_us": 0, 00:18:42.655 "keep_alive_timeout_ms": 10000, 00:18:42.655 "arbitration_burst": 0, 00:18:42.655 "low_priority_weight": 0, 00:18:42.655 "medium_priority_weight": 0, 00:18:42.655 "high_priority_weight": 0, 00:18:42.655 "nvme_adminq_poll_period_us": 10000, 00:18:42.655 "nvme_ioq_poll_period_us": 0, 00:18:42.655 "io_queue_requests": 0, 00:18:42.655 "delay_cmd_submit": true, 00:18:42.655 "transport_retry_count": 4, 00:18:42.655 "bdev_retry_count": 3, 00:18:42.655 "transport_ack_timeout": 0, 00:18:42.655 "ctrlr_loss_timeout_sec": 0, 00:18:42.655 "reconnect_delay_sec": 0, 00:18:42.655 "fast_io_fail_timeout_sec": 0, 00:18:42.655 "disable_auto_failback": false, 00:18:42.655 "generate_uuids": false, 00:18:42.655 "transport_tos": 0, 00:18:42.655 "nvme_error_stat": false, 00:18:42.655 "rdma_srq_size": 0, 00:18:42.655 "io_path_stat": false, 00:18:42.655 "allow_accel_sequence": false, 00:18:42.655 "rdma_max_cq_size": 0, 00:18:42.655 "rdma_cm_event_timeout_ms": 0, 00:18:42.655 "dhchap_digests": [ 00:18:42.655 "sha256", 00:18:42.655 "sha384", 00:18:42.655 "sha512" 00:18:42.655 ], 00:18:42.655 "dhchap_dhgroups": [ 00:18:42.655 "null", 00:18:42.655 "ffdhe2048", 00:18:42.655 "ffdhe3072", 00:18:42.655 "ffdhe4096", 00:18:42.655 "ffdhe6144", 00:18:42.655 "ffdhe8192" 00:18:42.655 ] 00:18:42.655 } 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "method": "bdev_nvme_set_hotplug", 00:18:42.655 "params": { 00:18:42.655 "period_us": 100000, 00:18:42.655 "enable": false 00:18:42.655 } 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "method": "bdev_malloc_create", 00:18:42.655 
"params": { 00:18:42.655 "name": "malloc0", 00:18:42.655 "num_blocks": 8192, 00:18:42.655 "block_size": 4096, 00:18:42.655 "physical_block_size": 4096, 00:18:42.655 "uuid": "84902ddd-0037-4095-bdf2-b7a46fcb280e", 00:18:42.655 "optimal_io_boundary": 0 00:18:42.655 } 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "method": "bdev_wait_for_examine" 00:18:42.655 } 00:18:42.655 ] 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "subsystem": "nbd", 00:18:42.655 "config": [] 00:18:42.655 }, 00:18:42.655 { 00:18:42.655 "subsystem": "scheduler", 00:18:42.655 "config": [ 00:18:42.655 { 00:18:42.655 "method": "framework_set_scheduler", 00:18:42.655 "params": { 00:18:42.656 "name": "static" 00:18:42.656 } 00:18:42.656 } 00:18:42.656 ] 00:18:42.656 }, 00:18:42.656 { 00:18:42.656 "subsystem": "nvmf", 00:18:42.656 "config": [ 00:18:42.656 { 00:18:42.656 "method": "nvmf_set_config", 00:18:42.656 "params": { 00:18:42.656 "discovery_filter": "match_any", 00:18:42.656 "admin_cmd_passthru": { 00:18:42.656 "identify_ctrlr": false 00:18:42.656 } 00:18:42.656 } 00:18:42.656 }, 00:18:42.656 { 00:18:42.656 "method": "nvmf_set_max_subsystems", 00:18:42.656 "params": { 00:18:42.656 "max_subsystems": 1024 00:18:42.656 } 00:18:42.656 }, 00:18:42.656 { 00:18:42.656 "method": "nvmf_set_crdt", 00:18:42.656 "params": { 00:18:42.656 "crdt1": 0, 00:18:42.656 "crdt2": 0, 00:18:42.656 "crdt3": 0 00:18:42.656 } 00:18:42.656 }, 00:18:42.656 { 00:18:42.656 "method": "nvmf_create_transport", 00:18:42.656 "params": { 00:18:42.656 "trtype": "TCP", 00:18:42.656 "max_queue_depth": 128, 00:18:42.656 "max_io_qpairs_per_ctrlr": 127, 00:18:42.656 "in_capsule_data_size": 4096, 00:18:42.656 "max_io_size": 131072, 00:18:42.656 "io_unit_size": 131072, 00:18:42.656 "max_aq_depth": 128, 00:18:42.656 "num_shared_buffers": 511, 00:18:42.656 "buf_cache_size": 4294967295, 00:18:42.656 "dif_insert_or_strip": false, 00:18:42.656 "zcopy": false, 00:18:42.656 "c2h_success": false, 00:18:42.656 "sock_priority": 0, 00:18:42.656 "abort_timeout_sec": 1, 00:18:42.656 "ack_timeout": 0 00:18:42.656 } 00:18:42.656 }, 00:18:42.656 { 00:18:42.656 "method": "nvmf_create_subsystem", 00:18:42.656 "params": { 00:18:42.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.656 "allow_any_host": false, 00:18:42.656 "serial_number": "SPDK00000000000001", 00:18:42.656 "model_number": "SPDK bdev Controller", 00:18:42.656 "max_namespaces": 10, 00:18:42.656 "min_cntlid": 1, 00:18:42.656 "max_cntlid": 65519, 00:18:42.656 "ana_reporting": false 00:18:42.656 } 00:18:42.656 }, 00:18:42.656 { 00:18:42.656 "method": "nvmf_subsystem_add_host", 00:18:42.656 "params": { 00:18:42.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.656 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.656 "psk": "/tmp/tmp.W1fES83uIr" 00:18:42.656 } 00:18:42.656 }, 00:18:42.656 { 00:18:42.656 "method": "nvmf_subsystem_add_ns", 00:18:42.656 "params": { 00:18:42.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.656 "namespace": { 00:18:42.656 "nsid": 1, 00:18:42.656 "bdev_name": "malloc0", 00:18:42.656 "nguid": "84902DDD00374095BDF2B7A46FCB280E", 00:18:42.656 "uuid": "84902ddd-0037-4095-bdf2-b7a46fcb280e", 00:18:42.656 "no_auto_visible": false 00:18:42.656 } 00:18:42.656 } 00:18:42.656 }, 00:18:42.656 { 00:18:42.656 "method": "nvmf_subsystem_add_listener", 00:18:42.656 "params": { 00:18:42.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.656 "listen_address": { 00:18:42.656 "trtype": "TCP", 00:18:42.656 "adrfam": "IPv4", 00:18:42.656 "traddr": "10.0.0.2", 00:18:42.656 "trsvcid": "4420" 00:18:42.656 }, 00:18:42.656 
"secure_channel": true 00:18:42.656 } 00:18:42.656 } 00:18:42.656 ] 00:18:42.656 } 00:18:42.656 ] 00:18:42.656 }' 00:18:42.656 21:11:58 -- common/autotest_common.sh@10 -- # set +x 00:18:42.656 21:11:58 -- nvmf/common.sh@470 -- # nvmfpid=3079003 00:18:42.656 21:11:58 -- nvmf/common.sh@471 -- # waitforlisten 3079003 00:18:42.656 21:11:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:42.656 21:11:58 -- common/autotest_common.sh@817 -- # '[' -z 3079003 ']' 00:18:42.656 21:11:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.656 21:11:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:42.656 21:11:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.656 21:11:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:42.656 21:11:58 -- common/autotest_common.sh@10 -- # set +x 00:18:42.656 [2024-04-18 21:11:58.561092] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:42.656 [2024-04-18 21:11:58.561138] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.916 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.916 [2024-04-18 21:11:58.623399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.916 [2024-04-18 21:11:58.699593] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.916 [2024-04-18 21:11:58.699628] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.916 [2024-04-18 21:11:58.699638] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.916 [2024-04-18 21:11:58.699644] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.916 [2024-04-18 21:11:58.699650] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.916 [2024-04-18 21:11:58.699698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.176 [2024-04-18 21:11:58.893593] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.176 [2024-04-18 21:11:58.909554] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:43.176 [2024-04-18 21:11:58.925596] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.176 [2024-04-18 21:11:58.933747] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.435 21:11:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:43.435 21:11:59 -- common/autotest_common.sh@850 -- # return 0 00:18:43.435 21:11:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:43.435 21:11:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:43.435 21:11:59 -- common/autotest_common.sh@10 -- # set +x 00:18:43.695 21:11:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.695 21:11:59 -- target/tls.sh@207 -- # bdevperf_pid=3079191 00:18:43.695 21:11:59 -- target/tls.sh@208 -- # waitforlisten 3079191 /var/tmp/bdevperf.sock 00:18:43.695 21:11:59 -- common/autotest_common.sh@817 -- # '[' -z 3079191 ']' 00:18:43.695 21:11:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.695 21:11:59 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:43.695 21:11:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:43.695 21:11:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
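The initiator side uses the same pattern: bdevperf pid 3079191 is started with -c /dev/fd/63 and fed the bdevperfconf JSON that follows, so the TLSTEST controller (including its psk path) is attached from the config at startup rather than by a separate rpc.py call. A sketch, assuming $bdevperfconf holds that JSON:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
      -c <(echo "$bdevperfconf")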
00:18:43.695 21:11:59 -- target/tls.sh@204 -- # echo '{ 00:18:43.695 "subsystems": [ 00:18:43.695 { 00:18:43.695 "subsystem": "keyring", 00:18:43.695 "config": [] 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "subsystem": "iobuf", 00:18:43.695 "config": [ 00:18:43.695 { 00:18:43.695 "method": "iobuf_set_options", 00:18:43.695 "params": { 00:18:43.695 "small_pool_count": 8192, 00:18:43.695 "large_pool_count": 1024, 00:18:43.695 "small_bufsize": 8192, 00:18:43.695 "large_bufsize": 135168 00:18:43.695 } 00:18:43.695 } 00:18:43.695 ] 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "subsystem": "sock", 00:18:43.695 "config": [ 00:18:43.695 { 00:18:43.695 "method": "sock_impl_set_options", 00:18:43.695 "params": { 00:18:43.695 "impl_name": "posix", 00:18:43.695 "recv_buf_size": 2097152, 00:18:43.695 "send_buf_size": 2097152, 00:18:43.695 "enable_recv_pipe": true, 00:18:43.695 "enable_quickack": false, 00:18:43.695 "enable_placement_id": 0, 00:18:43.695 "enable_zerocopy_send_server": true, 00:18:43.695 "enable_zerocopy_send_client": false, 00:18:43.695 "zerocopy_threshold": 0, 00:18:43.695 "tls_version": 0, 00:18:43.695 "enable_ktls": false 00:18:43.695 } 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "method": "sock_impl_set_options", 00:18:43.695 "params": { 00:18:43.695 "impl_name": "ssl", 00:18:43.695 "recv_buf_size": 4096, 00:18:43.695 "send_buf_size": 4096, 00:18:43.695 "enable_recv_pipe": true, 00:18:43.695 "enable_quickack": false, 00:18:43.695 "enable_placement_id": 0, 00:18:43.695 "enable_zerocopy_send_server": true, 00:18:43.695 "enable_zerocopy_send_client": false, 00:18:43.695 "zerocopy_threshold": 0, 00:18:43.695 "tls_version": 0, 00:18:43.695 "enable_ktls": false 00:18:43.695 } 00:18:43.695 } 00:18:43.695 ] 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "subsystem": "vmd", 00:18:43.695 "config": [] 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "subsystem": "accel", 00:18:43.695 "config": [ 00:18:43.695 { 00:18:43.695 "method": "accel_set_options", 00:18:43.695 "params": { 00:18:43.695 "small_cache_size": 128, 00:18:43.695 "large_cache_size": 16, 00:18:43.695 "task_count": 2048, 00:18:43.695 "sequence_count": 2048, 00:18:43.695 "buf_count": 2048 00:18:43.695 } 00:18:43.695 } 00:18:43.695 ] 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "subsystem": "bdev", 00:18:43.695 "config": [ 00:18:43.695 { 00:18:43.695 "method": "bdev_set_options", 00:18:43.695 "params": { 00:18:43.695 "bdev_io_pool_size": 65535, 00:18:43.695 "bdev_io_cache_size": 256, 00:18:43.695 "bdev_auto_examine": true, 00:18:43.695 "iobuf_small_cache_size": 128, 00:18:43.695 "iobuf_large_cache_size": 16 00:18:43.695 } 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "method": "bdev_raid_set_options", 00:18:43.695 "params": { 00:18:43.695 "process_window_size_kb": 1024 00:18:43.695 } 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "method": "bdev_iscsi_set_options", 00:18:43.695 "params": { 00:18:43.695 "timeout_sec": 30 00:18:43.695 } 00:18:43.695 }, 00:18:43.695 { 00:18:43.695 "method": "bdev_nvme_set_options", 00:18:43.696 "params": { 00:18:43.696 "action_on_timeout": "none", 00:18:43.696 "timeout_us": 0, 00:18:43.696 "timeout_admin_us": 0, 00:18:43.696 "keep_alive_timeout_ms": 10000, 00:18:43.696 "arbitration_burst": 0, 00:18:43.696 "low_priority_weight": 0, 00:18:43.696 "medium_priority_weight": 0, 00:18:43.696 "high_priority_weight": 0, 00:18:43.696 "nvme_adminq_poll_period_us": 10000, 00:18:43.696 "nvme_ioq_poll_period_us": 0, 00:18:43.696 "io_queue_requests": 512, 00:18:43.696 "delay_cmd_submit": true, 00:18:43.696 "transport_retry_count": 
4, 00:18:43.696 "bdev_retry_count": 3, 00:18:43.696 "transport_ack_timeout": 0, 00:18:43.696 "ctrlr_loss_timeout_sec": 0, 00:18:43.696 "reconnect_delay_sec": 0, 00:18:43.696 "fast_io_fail_timeout_sec": 0, 00:18:43.696 "disable_auto_failback": false, 00:18:43.696 "generate_uuids": false, 00:18:43.696 "transport_tos": 0, 00:18:43.696 "nvme_error_stat": false, 00:18:43.696 "rdma_srq_size": 0, 00:18:43.696 "io_path_stat": false, 00:18:43.696 "allow_accel_sequence": false, 00:18:43.696 "rdma_max_cq_size": 0, 00:18:43.696 "rdma_cm_event_timeout_ms": 0, 00:18:43.696 "dhchap_digests": [ 00:18:43.696 "sha256", 00:18:43.696 "sha384", 00:18:43.696 "sha512" 00:18:43.696 ], 00:18:43.696 "dhchap_dhgroups": [ 00:18:43.696 "null", 00:18:43.696 "ffdhe2048", 00:18:43.696 "ffdhe3072", 00:18:43.696 "ffdhe4096", 00:18:43.696 "ffdhe6144", 00:18:43.696 "ffdhe8192" 00:18:43.696 ] 00:18:43.696 } 00:18:43.696 }, 00:18:43.696 { 00:18:43.696 "method": "bdev_nvme_attach_controller", 00:18:43.696 "params": { 00:18:43.696 "name": "TLSTEST", 00:18:43.696 "trtype": "TCP", 00:18:43.696 "adrfam": "IPv4", 00:18:43.696 "traddr": "10.0.0.2", 00:18:43.696 "trsvcid": "4420", 00:18:43.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.696 "prchk_reftag": false, 00:18:43.696 "prchk_guard": false, 00:18:43.696 "ctrlr_loss_timeout_sec": 0, 00:18:43.696 "reconnect_delay_sec": 0, 00:18:43.696 "fast_io_fail_timeout_sec": 0, 00:18:43.696 "psk": "/tmp/tmp.W1fES83uIr", 00:18:43.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.696 "hdgst": false, 00:18:43.696 "ddgst": false 00:18:43.696 } 00:18:43.696 }, 00:18:43.696 { 00:18:43.696 "method": "bdev_nvme_set_hotplug", 00:18:43.696 "params": { 00:18:43.696 "period_us": 100000, 00:18:43.696 "enable": false 00:18:43.696 } 00:18:43.696 }, 00:18:43.696 { 00:18:43.696 "method": "bdev_wait_for_examine" 00:18:43.696 } 00:18:43.696 ] 00:18:43.696 }, 00:18:43.696 { 00:18:43.696 "subsystem": "nbd", 00:18:43.696 "config": [] 00:18:43.696 } 00:18:43.696 ] 00:18:43.696 }' 00:18:43.696 21:11:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:43.696 21:11:59 -- common/autotest_common.sh@10 -- # set +x 00:18:43.696 [2024-04-18 21:11:59.436756] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:43.696 [2024-04-18 21:11:59.436803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079191 ] 00:18:43.696 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.696 [2024-04-18 21:11:59.490613] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.696 [2024-04-18 21:11:59.561856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.956 [2024-04-18 21:11:59.696609] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.956 [2024-04-18 21:11:59.696685] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:44.524 21:12:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:44.524 21:12:00 -- common/autotest_common.sh@850 -- # return 0 00:18:44.524 21:12:00 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:44.524 Running I/O for 10 seconds... 
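The result table that follows can be sanity-checked with a rough Little's-law estimate (exact only while the queue stays full): average latency ~ queue depth / IOPS, and throughput ~ IOPS x block size. With -q 128 and ~2500 IOPS that gives ~51 ms and ~9.8 MiB/s, matching the 51116.00 us average and 9.77 MiB/s reported for TLSTESTn1 below:

  awk 'BEGIN { printf "avg latency ~ %.0f us\n", 128 / 2500.88 * 1e6 }'          # ~51182 us
  awk 'BEGIN { printf "throughput  ~ %.2f MiB/s\n", 2500.88 * 4096 / 1048576 }'  # ~9.77 MiB/s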
00:18:54.535 00:18:54.535 Latency(us) 00:18:54.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.535 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:54.535 Verification LBA range: start 0x0 length 0x2000 00:18:54.535 TLSTESTn1 : 10.03 2500.88 9.77 0.00 0.00 51116.00 5014.93 81150.66 00:18:54.535 =================================================================================================================== 00:18:54.535 Total : 2500.88 9.77 0.00 0.00 51116.00 5014.93 81150.66 00:18:54.535 0 00:18:54.535 21:12:10 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.535 21:12:10 -- target/tls.sh@214 -- # killprocess 3079191 00:18:54.535 21:12:10 -- common/autotest_common.sh@936 -- # '[' -z 3079191 ']' 00:18:54.535 21:12:10 -- common/autotest_common.sh@940 -- # kill -0 3079191 00:18:54.535 21:12:10 -- common/autotest_common.sh@941 -- # uname 00:18:54.535 21:12:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:54.535 21:12:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3079191 00:18:54.795 21:12:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:54.795 21:12:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:54.795 21:12:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3079191' 00:18:54.795 killing process with pid 3079191 00:18:54.795 21:12:10 -- common/autotest_common.sh@955 -- # kill 3079191 00:18:54.795 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.795 00:18:54.795 Latency(us) 00:18:54.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.795 =================================================================================================================== 00:18:54.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.795 [2024-04-18 21:12:10.448576] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:54.795 21:12:10 -- common/autotest_common.sh@960 -- # wait 3079191 00:18:54.795 21:12:10 -- target/tls.sh@215 -- # killprocess 3079003 00:18:54.795 21:12:10 -- common/autotest_common.sh@936 -- # '[' -z 3079003 ']' 00:18:54.795 21:12:10 -- common/autotest_common.sh@940 -- # kill -0 3079003 00:18:54.795 21:12:10 -- common/autotest_common.sh@941 -- # uname 00:18:54.795 21:12:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:54.795 21:12:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3079003 00:18:54.795 21:12:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:54.795 21:12:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:54.795 21:12:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3079003' 00:18:54.795 killing process with pid 3079003 00:18:54.795 21:12:10 -- common/autotest_common.sh@955 -- # kill 3079003 00:18:54.795 [2024-04-18 21:12:10.702131] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:54.795 21:12:10 -- common/autotest_common.sh@960 -- # wait 3079003 00:18:55.055 21:12:10 -- target/tls.sh@218 -- # nvmfappstart 00:18:55.055 21:12:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:55.055 21:12:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:55.055 21:12:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.055 21:12:10 -- 
nvmf/common.sh@470 -- # nvmfpid=3081609 00:18:55.055 21:12:10 -- nvmf/common.sh@471 -- # waitforlisten 3081609 00:18:55.055 21:12:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:55.055 21:12:10 -- common/autotest_common.sh@817 -- # '[' -z 3081609 ']' 00:18:55.055 21:12:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.055 21:12:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:55.055 21:12:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.055 21:12:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:55.055 21:12:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.055 [2024-04-18 21:12:10.971737] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:55.055 [2024-04-18 21:12:10.971780] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.315 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.315 [2024-04-18 21:12:11.035532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.315 [2024-04-18 21:12:11.103056] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.315 [2024-04-18 21:12:11.103098] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.315 [2024-04-18 21:12:11.103105] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.315 [2024-04-18 21:12:11.103110] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.315 [2024-04-18 21:12:11.103115] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.315 [2024-04-18 21:12:11.103138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.884 21:12:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:55.884 21:12:11 -- common/autotest_common.sh@850 -- # return 0 00:18:55.884 21:12:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:55.884 21:12:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:55.884 21:12:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.884 21:12:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.884 21:12:11 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.W1fES83uIr 00:18:55.884 21:12:11 -- target/tls.sh@49 -- # local key=/tmp/tmp.W1fES83uIr 00:18:55.884 21:12:11 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:56.143 [2024-04-18 21:12:11.962699] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.143 21:12:11 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:56.403 21:12:12 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:56.403 [2024-04-18 21:12:12.307593] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.403 [2024-04-18 21:12:12.307795] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.403 21:12:12 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:56.663 malloc0 00:18:56.663 21:12:12 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:56.923 21:12:12 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W1fES83uIr 00:18:56.923 [2024-04-18 21:12:12.833377] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:56.923 21:12:12 -- target/tls.sh@222 -- # bdevperf_pid=3081865 00:18:56.923 21:12:12 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.923 21:12:12 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:56.923 21:12:12 -- target/tls.sh@225 -- # waitforlisten 3081865 /var/tmp/bdevperf.sock 00:18:56.923 21:12:12 -- common/autotest_common.sh@817 -- # '[' -z 3081865 ']' 00:18:56.923 21:12:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.923 21:12:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:56.923 21:12:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
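For reference, the setup_nvmf_tgt helper traced above boils down to the following RPC sequence (commands copied from the trace, long workspace paths shortened to scripts/rpc.py; the -k flag on the listener is what makes it TLS-capable, and --psk points at the temporary key file generated for this run):

    # sketch of the target-side TLS setup replayed from the trace above
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W1fES83uIr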
00:18:56.923 21:12:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:56.923 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.182 [2024-04-18 21:12:12.889042] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:57.182 [2024-04-18 21:12:12.889087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081865 ] 00:18:57.182 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.182 [2024-04-18 21:12:12.949522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.182 [2024-04-18 21:12:13.027451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.764 21:12:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:57.764 21:12:13 -- common/autotest_common.sh@850 -- # return 0 00:18:57.764 21:12:13 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.W1fES83uIr 00:18:58.023 21:12:13 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:58.282 [2024-04-18 21:12:14.022428] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.282 nvme0n1 00:18:58.282 21:12:14 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:58.282 Running I/O for 1 seconds... 
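The attach sequence just above is the keyring-based flow: the PSK file is first registered as a named key (key0) and the controller attach then references that key instead of a raw path, which avoids the deprecated spdk_nvme_ctrlr_opts.psk field flagged earlier in this log. A minimal sketch of those two calls as traced (paths shortened):

    # sketch: register the PSK with the keyring, then attach using the key name
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.W1fES83uIr
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1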
00:18:59.660 00:18:59.660 Latency(us) 00:18:59.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.660 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:59.660 Verification LBA range: start 0x0 length 0x2000 00:18:59.660 nvme0n1 : 1.04 2386.61 9.32 0.00 0.00 52751.27 4986.43 101210.38 00:18:59.660 =================================================================================================================== 00:18:59.660 Total : 2386.61 9.32 0.00 0.00 52751.27 4986.43 101210.38 00:18:59.660 0 00:18:59.660 21:12:15 -- target/tls.sh@234 -- # killprocess 3081865 00:18:59.660 21:12:15 -- common/autotest_common.sh@936 -- # '[' -z 3081865 ']' 00:18:59.660 21:12:15 -- common/autotest_common.sh@940 -- # kill -0 3081865 00:18:59.660 21:12:15 -- common/autotest_common.sh@941 -- # uname 00:18:59.660 21:12:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.660 21:12:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3081865 00:18:59.660 21:12:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:59.660 21:12:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:59.660 21:12:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3081865' 00:18:59.660 killing process with pid 3081865 00:18:59.660 21:12:15 -- common/autotest_common.sh@955 -- # kill 3081865 00:18:59.660 Received shutdown signal, test time was about 1.000000 seconds 00:18:59.660 00:18:59.660 Latency(us) 00:18:59.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.660 =================================================================================================================== 00:18:59.660 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.660 21:12:15 -- common/autotest_common.sh@960 -- # wait 3081865 00:18:59.660 21:12:15 -- target/tls.sh@235 -- # killprocess 3081609 00:18:59.660 21:12:15 -- common/autotest_common.sh@936 -- # '[' -z 3081609 ']' 00:18:59.660 21:12:15 -- common/autotest_common.sh@940 -- # kill -0 3081609 00:18:59.660 21:12:15 -- common/autotest_common.sh@941 -- # uname 00:18:59.660 21:12:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.660 21:12:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3081609 00:18:59.660 21:12:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:59.660 21:12:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:59.660 21:12:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3081609' 00:18:59.660 killing process with pid 3081609 00:18:59.660 21:12:15 -- common/autotest_common.sh@955 -- # kill 3081609 00:18:59.660 [2024-04-18 21:12:15.575750] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:59.660 21:12:15 -- common/autotest_common.sh@960 -- # wait 3081609 00:18:59.920 21:12:15 -- target/tls.sh@238 -- # nvmfappstart 00:18:59.920 21:12:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:59.920 21:12:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:59.920 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.920 21:12:15 -- nvmf/common.sh@470 -- # nvmfpid=3082344 00:18:59.920 21:12:15 -- nvmf/common.sh@471 -- # waitforlisten 3082344 00:18:59.920 21:12:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
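The killprocess calls traced above follow the same guard pattern each time before the target is restarted: check that the PID is still alive, look up its command name (reactor_0, reactor_1, ...), then kill and wait so the next stage starts from a clean slate. A simplified sketch of that shape (the real helper in autotest_common.sh carries extra branches, such as the sudo check, that are not reproduced here):

    # simplified sketch of the killprocess pattern seen in this trace
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1              # still running?
        ps --no-headers -o comm= "$pid"         # which app/reactor is being stopped
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }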
00:18:59.920 21:12:15 -- common/autotest_common.sh@817 -- # '[' -z 3082344 ']' 00:18:59.920 21:12:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.920 21:12:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:59.920 21:12:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.920 21:12:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:59.920 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.920 [2024-04-18 21:12:15.847371] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:18:59.920 [2024-04-18 21:12:15.847417] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.179 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.179 [2024-04-18 21:12:15.909411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.179 [2024-04-18 21:12:15.986653] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.179 [2024-04-18 21:12:15.986688] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.179 [2024-04-18 21:12:15.986695] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.179 [2024-04-18 21:12:15.986701] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.179 [2024-04-18 21:12:15.986707] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
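The -e 0xFFFF argument enables every tracepoint group in the target, which is why the app prints the spdk_trace hint above; the cleanup stage at the end of this test archives the resulting trace file from /dev/shm. As suggested by the notices themselves (nothing beyond them assumed):

    # snapshot the running target's trace buffer (instance id 0)
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 .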
00:19:00.179 [2024-04-18 21:12:15.986722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.748 21:12:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:00.748 21:12:16 -- common/autotest_common.sh@850 -- # return 0 00:19:00.748 21:12:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:00.748 21:12:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:00.748 21:12:16 -- common/autotest_common.sh@10 -- # set +x 00:19:00.748 21:12:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.748 21:12:16 -- target/tls.sh@239 -- # rpc_cmd 00:19:00.748 21:12:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.748 21:12:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.007 [2024-04-18 21:12:16.683229] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.007 malloc0 00:19:01.007 [2024-04-18 21:12:16.711438] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.007 [2024-04-18 21:12:16.711637] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.007 21:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.007 21:12:16 -- target/tls.sh@252 -- # bdevperf_pid=3082591 00:19:01.007 21:12:16 -- target/tls.sh@254 -- # waitforlisten 3082591 /var/tmp/bdevperf.sock 00:19:01.007 21:12:16 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:01.007 21:12:16 -- common/autotest_common.sh@817 -- # '[' -z 3082591 ']' 00:19:01.007 21:12:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.007 21:12:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:01.007 21:12:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.007 21:12:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:01.007 21:12:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.007 [2024-04-18 21:12:16.784787] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:19:01.007 [2024-04-18 21:12:16.784828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082591 ] 00:19:01.007 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.007 [2024-04-18 21:12:16.844199] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.007 [2024-04-18 21:12:16.921992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.002 21:12:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:02.002 21:12:17 -- common/autotest_common.sh@850 -- # return 0 00:19:02.002 21:12:17 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.W1fES83uIr 00:19:02.002 21:12:17 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:02.002 [2024-04-18 21:12:17.896878] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.261 nvme0n1 00:19:02.261 21:12:17 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:02.261 Running I/O for 1 seconds... 00:19:03.199 00:19:03.200 Latency(us) 00:19:03.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.200 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:03.200 Verification LBA range: start 0x0 length 0x2000 00:19:03.200 nvme0n1 : 1.05 2491.78 9.73 0.00 0.00 50408.63 6838.54 74768.03 00:19:03.200 =================================================================================================================== 00:19:03.200 Total : 2491.78 9.73 0.00 0.00 50408.63 6838.54 74768.03 00:19:03.200 0 00:19:03.460 21:12:19 -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:03.460 21:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.460 21:12:19 -- common/autotest_common.sh@10 -- # set +x 00:19:03.460 21:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.460 21:12:19 -- target/tls.sh@263 -- # tgtcfg='{ 00:19:03.460 "subsystems": [ 00:19:03.460 { 00:19:03.460 "subsystem": "keyring", 00:19:03.460 "config": [ 00:19:03.460 { 00:19:03.460 "method": "keyring_file_add_key", 00:19:03.460 "params": { 00:19:03.460 "name": "key0", 00:19:03.460 "path": "/tmp/tmp.W1fES83uIr" 00:19:03.460 } 00:19:03.460 } 00:19:03.460 ] 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "subsystem": "iobuf", 00:19:03.460 "config": [ 00:19:03.460 { 00:19:03.460 "method": "iobuf_set_options", 00:19:03.460 "params": { 00:19:03.460 "small_pool_count": 8192, 00:19:03.460 "large_pool_count": 1024, 00:19:03.460 "small_bufsize": 8192, 00:19:03.460 "large_bufsize": 135168 00:19:03.460 } 00:19:03.460 } 00:19:03.460 ] 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "subsystem": "sock", 00:19:03.460 "config": [ 00:19:03.460 { 00:19:03.460 "method": "sock_impl_set_options", 00:19:03.460 "params": { 00:19:03.460 "impl_name": "posix", 00:19:03.460 "recv_buf_size": 2097152, 00:19:03.460 "send_buf_size": 2097152, 00:19:03.460 "enable_recv_pipe": true, 00:19:03.460 "enable_quickack": false, 00:19:03.460 "enable_placement_id": 0, 00:19:03.460 
"enable_zerocopy_send_server": true, 00:19:03.460 "enable_zerocopy_send_client": false, 00:19:03.460 "zerocopy_threshold": 0, 00:19:03.460 "tls_version": 0, 00:19:03.460 "enable_ktls": false 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "sock_impl_set_options", 00:19:03.460 "params": { 00:19:03.460 "impl_name": "ssl", 00:19:03.460 "recv_buf_size": 4096, 00:19:03.460 "send_buf_size": 4096, 00:19:03.460 "enable_recv_pipe": true, 00:19:03.460 "enable_quickack": false, 00:19:03.460 "enable_placement_id": 0, 00:19:03.460 "enable_zerocopy_send_server": true, 00:19:03.460 "enable_zerocopy_send_client": false, 00:19:03.460 "zerocopy_threshold": 0, 00:19:03.460 "tls_version": 0, 00:19:03.460 "enable_ktls": false 00:19:03.460 } 00:19:03.460 } 00:19:03.460 ] 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "subsystem": "vmd", 00:19:03.460 "config": [] 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "subsystem": "accel", 00:19:03.460 "config": [ 00:19:03.460 { 00:19:03.460 "method": "accel_set_options", 00:19:03.460 "params": { 00:19:03.460 "small_cache_size": 128, 00:19:03.460 "large_cache_size": 16, 00:19:03.460 "task_count": 2048, 00:19:03.460 "sequence_count": 2048, 00:19:03.460 "buf_count": 2048 00:19:03.460 } 00:19:03.460 } 00:19:03.460 ] 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "subsystem": "bdev", 00:19:03.460 "config": [ 00:19:03.460 { 00:19:03.460 "method": "bdev_set_options", 00:19:03.460 "params": { 00:19:03.460 "bdev_io_pool_size": 65535, 00:19:03.460 "bdev_io_cache_size": 256, 00:19:03.460 "bdev_auto_examine": true, 00:19:03.460 "iobuf_small_cache_size": 128, 00:19:03.460 "iobuf_large_cache_size": 16 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "bdev_raid_set_options", 00:19:03.460 "params": { 00:19:03.460 "process_window_size_kb": 1024 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "bdev_iscsi_set_options", 00:19:03.460 "params": { 00:19:03.460 "timeout_sec": 30 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "bdev_nvme_set_options", 00:19:03.460 "params": { 00:19:03.460 "action_on_timeout": "none", 00:19:03.460 "timeout_us": 0, 00:19:03.460 "timeout_admin_us": 0, 00:19:03.460 "keep_alive_timeout_ms": 10000, 00:19:03.460 "arbitration_burst": 0, 00:19:03.460 "low_priority_weight": 0, 00:19:03.460 "medium_priority_weight": 0, 00:19:03.460 "high_priority_weight": 0, 00:19:03.460 "nvme_adminq_poll_period_us": 10000, 00:19:03.460 "nvme_ioq_poll_period_us": 0, 00:19:03.460 "io_queue_requests": 0, 00:19:03.460 "delay_cmd_submit": true, 00:19:03.460 "transport_retry_count": 4, 00:19:03.460 "bdev_retry_count": 3, 00:19:03.460 "transport_ack_timeout": 0, 00:19:03.460 "ctrlr_loss_timeout_sec": 0, 00:19:03.460 "reconnect_delay_sec": 0, 00:19:03.460 "fast_io_fail_timeout_sec": 0, 00:19:03.460 "disable_auto_failback": false, 00:19:03.460 "generate_uuids": false, 00:19:03.460 "transport_tos": 0, 00:19:03.460 "nvme_error_stat": false, 00:19:03.460 "rdma_srq_size": 0, 00:19:03.460 "io_path_stat": false, 00:19:03.460 "allow_accel_sequence": false, 00:19:03.460 "rdma_max_cq_size": 0, 00:19:03.460 "rdma_cm_event_timeout_ms": 0, 00:19:03.460 "dhchap_digests": [ 00:19:03.460 "sha256", 00:19:03.460 "sha384", 00:19:03.460 "sha512" 00:19:03.460 ], 00:19:03.460 "dhchap_dhgroups": [ 00:19:03.460 "null", 00:19:03.460 "ffdhe2048", 00:19:03.460 "ffdhe3072", 00:19:03.460 "ffdhe4096", 00:19:03.460 "ffdhe6144", 00:19:03.460 "ffdhe8192" 00:19:03.460 ] 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": 
"bdev_nvme_set_hotplug", 00:19:03.460 "params": { 00:19:03.460 "period_us": 100000, 00:19:03.460 "enable": false 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "bdev_malloc_create", 00:19:03.460 "params": { 00:19:03.460 "name": "malloc0", 00:19:03.460 "num_blocks": 8192, 00:19:03.460 "block_size": 4096, 00:19:03.460 "physical_block_size": 4096, 00:19:03.460 "uuid": "da907efc-96dc-42aa-9fb0-a0d9bfbab681", 00:19:03.460 "optimal_io_boundary": 0 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "bdev_wait_for_examine" 00:19:03.460 } 00:19:03.460 ] 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "subsystem": "nbd", 00:19:03.460 "config": [] 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "subsystem": "scheduler", 00:19:03.460 "config": [ 00:19:03.460 { 00:19:03.460 "method": "framework_set_scheduler", 00:19:03.460 "params": { 00:19:03.460 "name": "static" 00:19:03.460 } 00:19:03.460 } 00:19:03.460 ] 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "subsystem": "nvmf", 00:19:03.460 "config": [ 00:19:03.460 { 00:19:03.460 "method": "nvmf_set_config", 00:19:03.460 "params": { 00:19:03.460 "discovery_filter": "match_any", 00:19:03.460 "admin_cmd_passthru": { 00:19:03.460 "identify_ctrlr": false 00:19:03.460 } 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "nvmf_set_max_subsystems", 00:19:03.460 "params": { 00:19:03.460 "max_subsystems": 1024 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "nvmf_set_crdt", 00:19:03.460 "params": { 00:19:03.460 "crdt1": 0, 00:19:03.460 "crdt2": 0, 00:19:03.460 "crdt3": 0 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "nvmf_create_transport", 00:19:03.460 "params": { 00:19:03.460 "trtype": "TCP", 00:19:03.460 "max_queue_depth": 128, 00:19:03.460 "max_io_qpairs_per_ctrlr": 127, 00:19:03.460 "in_capsule_data_size": 4096, 00:19:03.460 "max_io_size": 131072, 00:19:03.460 "io_unit_size": 131072, 00:19:03.460 "max_aq_depth": 128, 00:19:03.460 "num_shared_buffers": 511, 00:19:03.460 "buf_cache_size": 4294967295, 00:19:03.460 "dif_insert_or_strip": false, 00:19:03.460 "zcopy": false, 00:19:03.460 "c2h_success": false, 00:19:03.460 "sock_priority": 0, 00:19:03.460 "abort_timeout_sec": 1, 00:19:03.460 "ack_timeout": 0 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "nvmf_create_subsystem", 00:19:03.460 "params": { 00:19:03.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.460 "allow_any_host": false, 00:19:03.460 "serial_number": "00000000000000000000", 00:19:03.460 "model_number": "SPDK bdev Controller", 00:19:03.460 "max_namespaces": 32, 00:19:03.460 "min_cntlid": 1, 00:19:03.460 "max_cntlid": 65519, 00:19:03.460 "ana_reporting": false 00:19:03.460 } 00:19:03.460 }, 00:19:03.460 { 00:19:03.460 "method": "nvmf_subsystem_add_host", 00:19:03.460 "params": { 00:19:03.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.461 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.461 "psk": "key0" 00:19:03.461 } 00:19:03.461 }, 00:19:03.461 { 00:19:03.461 "method": "nvmf_subsystem_add_ns", 00:19:03.461 "params": { 00:19:03.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.461 "namespace": { 00:19:03.461 "nsid": 1, 00:19:03.461 "bdev_name": "malloc0", 00:19:03.461 "nguid": "DA907EFC96DC42AA9FB0A0D9BFBAB681", 00:19:03.461 "uuid": "da907efc-96dc-42aa-9fb0-a0d9bfbab681", 00:19:03.461 "no_auto_visible": false 00:19:03.461 } 00:19:03.461 } 00:19:03.461 }, 00:19:03.461 { 00:19:03.461 "method": "nvmf_subsystem_add_listener", 00:19:03.461 "params": { 00:19:03.461 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:03.461 "listen_address": { 00:19:03.461 "trtype": "TCP", 00:19:03.461 "adrfam": "IPv4", 00:19:03.461 "traddr": "10.0.0.2", 00:19:03.461 "trsvcid": "4420" 00:19:03.461 }, 00:19:03.461 "secure_channel": true 00:19:03.461 } 00:19:03.461 } 00:19:03.461 ] 00:19:03.461 } 00:19:03.461 ] 00:19:03.461 }' 00:19:03.461 21:12:19 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:03.721 21:12:19 -- target/tls.sh@264 -- # bperfcfg='{ 00:19:03.721 "subsystems": [ 00:19:03.721 { 00:19:03.721 "subsystem": "keyring", 00:19:03.721 "config": [ 00:19:03.721 { 00:19:03.721 "method": "keyring_file_add_key", 00:19:03.721 "params": { 00:19:03.721 "name": "key0", 00:19:03.721 "path": "/tmp/tmp.W1fES83uIr" 00:19:03.721 } 00:19:03.721 } 00:19:03.721 ] 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "subsystem": "iobuf", 00:19:03.721 "config": [ 00:19:03.721 { 00:19:03.721 "method": "iobuf_set_options", 00:19:03.721 "params": { 00:19:03.721 "small_pool_count": 8192, 00:19:03.721 "large_pool_count": 1024, 00:19:03.721 "small_bufsize": 8192, 00:19:03.721 "large_bufsize": 135168 00:19:03.721 } 00:19:03.721 } 00:19:03.721 ] 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "subsystem": "sock", 00:19:03.721 "config": [ 00:19:03.721 { 00:19:03.721 "method": "sock_impl_set_options", 00:19:03.721 "params": { 00:19:03.721 "impl_name": "posix", 00:19:03.721 "recv_buf_size": 2097152, 00:19:03.721 "send_buf_size": 2097152, 00:19:03.721 "enable_recv_pipe": true, 00:19:03.721 "enable_quickack": false, 00:19:03.721 "enable_placement_id": 0, 00:19:03.721 "enable_zerocopy_send_server": true, 00:19:03.721 "enable_zerocopy_send_client": false, 00:19:03.721 "zerocopy_threshold": 0, 00:19:03.721 "tls_version": 0, 00:19:03.721 "enable_ktls": false 00:19:03.721 } 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "method": "sock_impl_set_options", 00:19:03.721 "params": { 00:19:03.721 "impl_name": "ssl", 00:19:03.721 "recv_buf_size": 4096, 00:19:03.721 "send_buf_size": 4096, 00:19:03.721 "enable_recv_pipe": true, 00:19:03.721 "enable_quickack": false, 00:19:03.721 "enable_placement_id": 0, 00:19:03.721 "enable_zerocopy_send_server": true, 00:19:03.721 "enable_zerocopy_send_client": false, 00:19:03.721 "zerocopy_threshold": 0, 00:19:03.721 "tls_version": 0, 00:19:03.721 "enable_ktls": false 00:19:03.721 } 00:19:03.721 } 00:19:03.721 ] 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "subsystem": "vmd", 00:19:03.721 "config": [] 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "subsystem": "accel", 00:19:03.721 "config": [ 00:19:03.721 { 00:19:03.721 "method": "accel_set_options", 00:19:03.721 "params": { 00:19:03.721 "small_cache_size": 128, 00:19:03.721 "large_cache_size": 16, 00:19:03.721 "task_count": 2048, 00:19:03.721 "sequence_count": 2048, 00:19:03.721 "buf_count": 2048 00:19:03.721 } 00:19:03.721 } 00:19:03.721 ] 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "subsystem": "bdev", 00:19:03.721 "config": [ 00:19:03.721 { 00:19:03.721 "method": "bdev_set_options", 00:19:03.721 "params": { 00:19:03.721 "bdev_io_pool_size": 65535, 00:19:03.721 "bdev_io_cache_size": 256, 00:19:03.721 "bdev_auto_examine": true, 00:19:03.721 "iobuf_small_cache_size": 128, 00:19:03.721 "iobuf_large_cache_size": 16 00:19:03.721 } 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "method": "bdev_raid_set_options", 00:19:03.721 "params": { 00:19:03.721 "process_window_size_kb": 1024 00:19:03.721 } 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "method": "bdev_iscsi_set_options", 
00:19:03.721 "params": { 00:19:03.721 "timeout_sec": 30 00:19:03.721 } 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "method": "bdev_nvme_set_options", 00:19:03.721 "params": { 00:19:03.721 "action_on_timeout": "none", 00:19:03.721 "timeout_us": 0, 00:19:03.721 "timeout_admin_us": 0, 00:19:03.721 "keep_alive_timeout_ms": 10000, 00:19:03.721 "arbitration_burst": 0, 00:19:03.721 "low_priority_weight": 0, 00:19:03.721 "medium_priority_weight": 0, 00:19:03.721 "high_priority_weight": 0, 00:19:03.721 "nvme_adminq_poll_period_us": 10000, 00:19:03.721 "nvme_ioq_poll_period_us": 0, 00:19:03.721 "io_queue_requests": 512, 00:19:03.721 "delay_cmd_submit": true, 00:19:03.721 "transport_retry_count": 4, 00:19:03.721 "bdev_retry_count": 3, 00:19:03.721 "transport_ack_timeout": 0, 00:19:03.721 "ctrlr_loss_timeout_sec": 0, 00:19:03.721 "reconnect_delay_sec": 0, 00:19:03.721 "fast_io_fail_timeout_sec": 0, 00:19:03.721 "disable_auto_failback": false, 00:19:03.721 "generate_uuids": false, 00:19:03.721 "transport_tos": 0, 00:19:03.721 "nvme_error_stat": false, 00:19:03.721 "rdma_srq_size": 0, 00:19:03.721 "io_path_stat": false, 00:19:03.721 "allow_accel_sequence": false, 00:19:03.721 "rdma_max_cq_size": 0, 00:19:03.721 "rdma_cm_event_timeout_ms": 0, 00:19:03.721 "dhchap_digests": [ 00:19:03.721 "sha256", 00:19:03.721 "sha384", 00:19:03.721 "sha512" 00:19:03.721 ], 00:19:03.721 "dhchap_dhgroups": [ 00:19:03.721 "null", 00:19:03.721 "ffdhe2048", 00:19:03.721 "ffdhe3072", 00:19:03.721 "ffdhe4096", 00:19:03.721 "ffdhe6144", 00:19:03.721 "ffdhe8192" 00:19:03.721 ] 00:19:03.721 } 00:19:03.721 }, 00:19:03.721 { 00:19:03.721 "method": "bdev_nvme_attach_controller", 00:19:03.721 "params": { 00:19:03.721 "name": "nvme0", 00:19:03.721 "trtype": "TCP", 00:19:03.721 "adrfam": "IPv4", 00:19:03.721 "traddr": "10.0.0.2", 00:19:03.722 "trsvcid": "4420", 00:19:03.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.722 "prchk_reftag": false, 00:19:03.722 "prchk_guard": false, 00:19:03.722 "ctrlr_loss_timeout_sec": 0, 00:19:03.722 "reconnect_delay_sec": 0, 00:19:03.722 "fast_io_fail_timeout_sec": 0, 00:19:03.722 "psk": "key0", 00:19:03.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.722 "hdgst": false, 00:19:03.722 "ddgst": false 00:19:03.722 } 00:19:03.722 }, 00:19:03.722 { 00:19:03.722 "method": "bdev_nvme_set_hotplug", 00:19:03.722 "params": { 00:19:03.722 "period_us": 100000, 00:19:03.722 "enable": false 00:19:03.722 } 00:19:03.722 }, 00:19:03.722 { 00:19:03.722 "method": "bdev_enable_histogram", 00:19:03.722 "params": { 00:19:03.722 "name": "nvme0n1", 00:19:03.722 "enable": true 00:19:03.722 } 00:19:03.722 }, 00:19:03.722 { 00:19:03.722 "method": "bdev_wait_for_examine" 00:19:03.722 } 00:19:03.722 ] 00:19:03.722 }, 00:19:03.722 { 00:19:03.722 "subsystem": "nbd", 00:19:03.722 "config": [] 00:19:03.722 } 00:19:03.722 ] 00:19:03.722 }' 00:19:03.722 21:12:19 -- target/tls.sh@266 -- # killprocess 3082591 00:19:03.722 21:12:19 -- common/autotest_common.sh@936 -- # '[' -z 3082591 ']' 00:19:03.722 21:12:19 -- common/autotest_common.sh@940 -- # kill -0 3082591 00:19:03.722 21:12:19 -- common/autotest_common.sh@941 -- # uname 00:19:03.722 21:12:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:03.722 21:12:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3082591 00:19:03.722 21:12:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:03.722 21:12:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:03.722 21:12:19 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 3082591' 00:19:03.722 killing process with pid 3082591 00:19:03.722 21:12:19 -- common/autotest_common.sh@955 -- # kill 3082591 00:19:03.722 Received shutdown signal, test time was about 1.000000 seconds 00:19:03.722 00:19:03.722 Latency(us) 00:19:03.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.722 =================================================================================================================== 00:19:03.722 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.722 21:12:19 -- common/autotest_common.sh@960 -- # wait 3082591 00:19:03.982 21:12:19 -- target/tls.sh@267 -- # killprocess 3082344 00:19:03.982 21:12:19 -- common/autotest_common.sh@936 -- # '[' -z 3082344 ']' 00:19:03.982 21:12:19 -- common/autotest_common.sh@940 -- # kill -0 3082344 00:19:03.982 21:12:19 -- common/autotest_common.sh@941 -- # uname 00:19:03.982 21:12:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:03.982 21:12:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3082344 00:19:03.982 21:12:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:03.982 21:12:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:03.982 21:12:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3082344' 00:19:03.982 killing process with pid 3082344 00:19:03.982 21:12:19 -- common/autotest_common.sh@955 -- # kill 3082344 00:19:03.982 21:12:19 -- common/autotest_common.sh@960 -- # wait 3082344 00:19:04.242 21:12:20 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:04.242 21:12:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:04.242 21:12:20 -- target/tls.sh@269 -- # echo '{ 00:19:04.242 "subsystems": [ 00:19:04.242 { 00:19:04.242 "subsystem": "keyring", 00:19:04.242 "config": [ 00:19:04.242 { 00:19:04.242 "method": "keyring_file_add_key", 00:19:04.242 "params": { 00:19:04.242 "name": "key0", 00:19:04.242 "path": "/tmp/tmp.W1fES83uIr" 00:19:04.242 } 00:19:04.242 } 00:19:04.242 ] 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "subsystem": "iobuf", 00:19:04.242 "config": [ 00:19:04.242 { 00:19:04.242 "method": "iobuf_set_options", 00:19:04.242 "params": { 00:19:04.242 "small_pool_count": 8192, 00:19:04.242 "large_pool_count": 1024, 00:19:04.242 "small_bufsize": 8192, 00:19:04.242 "large_bufsize": 135168 00:19:04.242 } 00:19:04.242 } 00:19:04.242 ] 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "subsystem": "sock", 00:19:04.242 "config": [ 00:19:04.242 { 00:19:04.242 "method": "sock_impl_set_options", 00:19:04.242 "params": { 00:19:04.242 "impl_name": "posix", 00:19:04.242 "recv_buf_size": 2097152, 00:19:04.242 "send_buf_size": 2097152, 00:19:04.242 "enable_recv_pipe": true, 00:19:04.242 "enable_quickack": false, 00:19:04.242 "enable_placement_id": 0, 00:19:04.242 "enable_zerocopy_send_server": true, 00:19:04.242 "enable_zerocopy_send_client": false, 00:19:04.242 "zerocopy_threshold": 0, 00:19:04.242 "tls_version": 0, 00:19:04.242 "enable_ktls": false 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "sock_impl_set_options", 00:19:04.242 "params": { 00:19:04.242 "impl_name": "ssl", 00:19:04.242 "recv_buf_size": 4096, 00:19:04.242 "send_buf_size": 4096, 00:19:04.242 "enable_recv_pipe": true, 00:19:04.242 "enable_quickack": false, 00:19:04.242 "enable_placement_id": 0, 00:19:04.242 "enable_zerocopy_send_server": true, 00:19:04.242 "enable_zerocopy_send_client": false, 00:19:04.242 "zerocopy_threshold": 0, 00:19:04.242 "tls_version": 0, 
00:19:04.242 "enable_ktls": false 00:19:04.242 } 00:19:04.242 } 00:19:04.242 ] 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "subsystem": "vmd", 00:19:04.242 "config": [] 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "subsystem": "accel", 00:19:04.242 "config": [ 00:19:04.242 { 00:19:04.242 "method": "accel_set_options", 00:19:04.242 "params": { 00:19:04.242 "small_cache_size": 128, 00:19:04.242 "large_cache_size": 16, 00:19:04.242 "task_count": 2048, 00:19:04.242 "sequence_count": 2048, 00:19:04.242 "buf_count": 2048 00:19:04.242 } 00:19:04.242 } 00:19:04.242 ] 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "subsystem": "bdev", 00:19:04.242 "config": [ 00:19:04.242 { 00:19:04.242 "method": "bdev_set_options", 00:19:04.242 "params": { 00:19:04.242 "bdev_io_pool_size": 65535, 00:19:04.242 "bdev_io_cache_size": 256, 00:19:04.242 "bdev_auto_examine": true, 00:19:04.242 "iobuf_small_cache_size": 128, 00:19:04.242 "iobuf_large_cache_size": 16 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "bdev_raid_set_options", 00:19:04.242 "params": { 00:19:04.242 "process_window_size_kb": 1024 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "bdev_iscsi_set_options", 00:19:04.242 "params": { 00:19:04.242 "timeout_sec": 30 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "bdev_nvme_set_options", 00:19:04.242 "params": { 00:19:04.242 "action_on_timeout": "none", 00:19:04.242 "timeout_us": 0, 00:19:04.242 "timeout_admin_us": 0, 00:19:04.242 "keep_alive_timeout_ms": 10000, 00:19:04.242 "arbitration_burst": 0, 00:19:04.242 "low_priority_weight": 0, 00:19:04.242 "medium_priority_weight": 0, 00:19:04.242 "high_priority_weight": 0, 00:19:04.242 "nvme_adminq_poll_period_us": 10000, 00:19:04.242 "nvme_ioq_poll_period_us": 0, 00:19:04.242 "io_queue_requests": 0, 00:19:04.242 "delay_cmd_submit": true, 00:19:04.242 "transport_retry_count": 4, 00:19:04.242 "bdev_retry_count": 3, 00:19:04.242 "transport_ack_timeout": 0, 00:19:04.242 "ctrlr_loss_timeout_sec": 0, 00:19:04.242 "reconnect_delay_sec": 0, 00:19:04.242 "fast_io_fail_timeout_sec": 0, 00:19:04.242 "disable_auto_failback": false, 00:19:04.242 "generate_uuids": false, 00:19:04.242 "transport_tos": 0, 00:19:04.242 "nvme_error_stat": false, 00:19:04.242 "rdma_srq_size": 0, 00:19:04.242 "io_path_stat": false, 00:19:04.242 "allow_accel_sequence": false, 00:19:04.242 "rdma_max_cq_size": 0, 00:19:04.242 "rdma_cm_event_timeout_ms": 0, 00:19:04.242 "dhchap_digests": [ 00:19:04.242 "sha256", 00:19:04.242 "sha384", 00:19:04.242 "sha512" 00:19:04.242 ], 00:19:04.242 "dhchap_dhgroups": [ 00:19:04.242 "null", 00:19:04.242 "ffdhe2048", 00:19:04.242 "ffdhe3072", 00:19:04.242 "ffdhe4096", 00:19:04.242 "ffdhe6144", 00:19:04.242 "ffdhe8192" 00:19:04.242 ] 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "bdev_nvme_set_hotplug", 00:19:04.242 "params": { 00:19:04.242 "period_us": 100000, 00:19:04.242 "enable": false 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "bdev_malloc_create", 00:19:04.242 "params": { 00:19:04.242 "name": "malloc0", 00:19:04.242 "num_blocks": 8192, 00:19:04.242 "block_size": 4096, 00:19:04.242 "physical_block_size": 4096, 00:19:04.242 "uuid": "da907efc-96dc-42aa-9fb0-a0d9bfbab681", 00:19:04.242 "optimal_io_boundary": 0 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "bdev_wait_for_examine" 00:19:04.242 } 00:19:04.242 ] 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "subsystem": "nbd", 00:19:04.242 "config": [] 00:19:04.242 }, 00:19:04.242 { 
00:19:04.242 "subsystem": "scheduler", 00:19:04.242 "config": [ 00:19:04.242 { 00:19:04.242 "method": "framework_set_scheduler", 00:19:04.242 "params": { 00:19:04.242 "name": "static" 00:19:04.242 } 00:19:04.242 } 00:19:04.242 ] 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "subsystem": "nvmf", 00:19:04.242 "config": [ 00:19:04.242 { 00:19:04.242 "method": "nvmf_set_config", 00:19:04.242 "params": { 00:19:04.242 "discovery_filter": "match_any", 00:19:04.242 "admin_cmd_passthru": { 00:19:04.242 "identify_ctrlr": false 00:19:04.242 } 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "nvmf_set_max_subsystems", 00:19:04.242 "params": { 00:19:04.242 "max_subsystems": 1024 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "nvmf_set_crdt", 00:19:04.242 "params": { 00:19:04.242 "crdt1": 0, 00:19:04.242 "crdt2": 0, 00:19:04.242 "crdt3": 0 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "nvmf_create_transport", 00:19:04.242 "params": { 00:19:04.242 "trtype": "TCP", 00:19:04.242 "max_queue_depth": 128, 00:19:04.242 "max_io_qpairs_per_ctrlr": 127, 00:19:04.242 "in_capsule_data_size": 4096, 00:19:04.242 "max_io_size": 131072, 00:19:04.242 "io_unit_size": 131072, 00:19:04.242 "max_aq_depth": 128, 00:19:04.242 "num_shared_buffers": 511, 00:19:04.242 "buf_cache_size": 4294967295, 00:19:04.242 "dif_insert_or_strip": false, 00:19:04.242 "zcopy": false, 00:19:04.242 "c2h_success": false, 00:19:04.242 "sock_priority": 0, 00:19:04.242 "abort_timeout_sec": 1, 00:19:04.242 "ack_timeout": 0 00:19:04.242 } 00:19:04.242 }, 00:19:04.242 { 00:19:04.242 "method": "nvmf_create_subsystem", 00:19:04.242 "params": { 00:19:04.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.242 "allow_any_host": false, 00:19:04.242 "serial_number": "00000000000000000000", 00:19:04.242 "model_number": "SPDK bdev Controller", 00:19:04.243 "max_names 21:12:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:04.243 paces": 32, 00:19:04.243 "min_cntlid": 1, 00:19:04.243 "max_cntlid": 65519, 00:19:04.243 "ana_reporting": false 00:19:04.243 } 00:19:04.243 }, 00:19:04.243 { 00:19:04.243 "method": "nvmf_subsystem_add_host", 00:19:04.243 "params": { 00:19:04.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.243 "host": "nqn.2016-06.io.spdk:host1", 00:19:04.243 "psk": "key0" 00:19:04.243 } 00:19:04.243 }, 00:19:04.243 { 00:19:04.243 "method": "nvmf_subsystem_add_ns", 00:19:04.243 "params": { 00:19:04.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.243 "namespace": { 00:19:04.243 "nsid": 1, 00:19:04.243 "bdev_name": "malloc0", 00:19:04.243 "nguid": "DA907EFC96DC42AA9FB0A0D9BFBAB681", 00:19:04.243 "uuid": "da907efc-96dc-42aa-9fb0-a0d9bfbab681", 00:19:04.243 "no_auto_visible": false 00:19:04.243 } 00:19:04.243 } 00:19:04.243 }, 00:19:04.243 { 00:19:04.243 "method": "nvmf_subsystem_add_listener", 00:19:04.243 "params": { 00:19:04.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.243 "listen_address": { 00:19:04.243 "trtype": "TCP", 00:19:04.243 "adrfam": "IPv4", 00:19:04.243 "traddr": "10.0.0.2", 00:19:04.243 "trsvcid": "4420" 00:19:04.243 }, 00:19:04.243 "secure_channel": true 00:19:04.243 } 00:19:04.243 } 00:19:04.243 ] 00:19:04.243 } 00:19:04.243 ] 00:19:04.243 }' 00:19:04.243 21:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.243 21:12:20 -- nvmf/common.sh@470 -- # nvmfpid=3083078 00:19:04.243 21:12:20 -- nvmf/common.sh@471 -- # waitforlisten 3083078 00:19:04.243 21:12:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:04.243 21:12:20 -- common/autotest_common.sh@817 -- # '[' -z 3083078 ']' 00:19:04.243 21:12:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.243 21:12:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.243 21:12:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.243 21:12:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.243 21:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.243 [2024-04-18 21:12:20.074900] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:19:04.243 [2024-04-18 21:12:20.074947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.243 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.243 [2024-04-18 21:12:20.138837] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.503 [2024-04-18 21:12:20.210810] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.503 [2024-04-18 21:12:20.210851] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.503 [2024-04-18 21:12:20.210858] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.503 [2024-04-18 21:12:20.210865] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.503 [2024-04-18 21:12:20.210871] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.503 [2024-04-18 21:12:20.210928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.503 [2024-04-18 21:12:20.415458] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.763 [2024-04-18 21:12:20.447485] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.763 [2024-04-18 21:12:20.465810] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.023 21:12:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:05.023 21:12:20 -- common/autotest_common.sh@850 -- # return 0 00:19:05.023 21:12:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:05.023 21:12:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:05.023 21:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:05.023 21:12:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.023 21:12:20 -- target/tls.sh@272 -- # bdevperf_pid=3083314 00:19:05.023 21:12:20 -- target/tls.sh@273 -- # waitforlisten 3083314 /var/tmp/bdevperf.sock 00:19:05.023 21:12:20 -- common/autotest_common.sh@817 -- # '[' -z 3083314 ']' 00:19:05.023 21:12:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.023 21:12:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:05.023 21:12:20 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:05.023 21:12:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
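Here the JSON documents captured with save_config a moment ago ($tgtcfg for the target, $bperfcfg for bdevperf) are fed straight back in as startup configuration: the -c /dev/fd/62 and -c /dev/fd/63 arguments in the traces above and below are what bash process substitution around an echo expands to. A sketch of that replay pattern as it presumably appears in target/tls.sh (the <( ... ) form is inferred from the /dev/fd paths, not shown verbatim in the trace; paths shortened):

    # replay the saved configs as startup configuration
    nvmfappstart -c <(echo "$tgtcfg")                                  # traced as: nvmf_tgt ... -c /dev/fd/62
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")             # traced as: bdevperf ... -c /dev/fd/63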
00:19:05.023 21:12:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:05.023 21:12:20 -- target/tls.sh@270 -- # echo '{ 00:19:05.023 "subsystems": [ 00:19:05.023 { 00:19:05.023 "subsystem": "keyring", 00:19:05.023 "config": [ 00:19:05.023 { 00:19:05.023 "method": "keyring_file_add_key", 00:19:05.023 "params": { 00:19:05.023 "name": "key0", 00:19:05.023 "path": "/tmp/tmp.W1fES83uIr" 00:19:05.023 } 00:19:05.023 } 00:19:05.023 ] 00:19:05.023 }, 00:19:05.023 { 00:19:05.023 "subsystem": "iobuf", 00:19:05.023 "config": [ 00:19:05.023 { 00:19:05.023 "method": "iobuf_set_options", 00:19:05.023 "params": { 00:19:05.023 "small_pool_count": 8192, 00:19:05.023 "large_pool_count": 1024, 00:19:05.023 "small_bufsize": 8192, 00:19:05.023 "large_bufsize": 135168 00:19:05.023 } 00:19:05.023 } 00:19:05.023 ] 00:19:05.023 }, 00:19:05.023 { 00:19:05.023 "subsystem": "sock", 00:19:05.023 "config": [ 00:19:05.023 { 00:19:05.023 "method": "sock_impl_set_options", 00:19:05.023 "params": { 00:19:05.023 "impl_name": "posix", 00:19:05.023 "recv_buf_size": 2097152, 00:19:05.023 "send_buf_size": 2097152, 00:19:05.023 "enable_recv_pipe": true, 00:19:05.023 "enable_quickack": false, 00:19:05.023 "enable_placement_id": 0, 00:19:05.023 "enable_zerocopy_send_server": true, 00:19:05.023 "enable_zerocopy_send_client": false, 00:19:05.023 "zerocopy_threshold": 0, 00:19:05.023 "tls_version": 0, 00:19:05.023 "enable_ktls": false 00:19:05.023 } 00:19:05.023 }, 00:19:05.023 { 00:19:05.023 "method": "sock_impl_set_options", 00:19:05.023 "params": { 00:19:05.023 "impl_name": "ssl", 00:19:05.023 "recv_buf_size": 4096, 00:19:05.023 "send_buf_size": 4096, 00:19:05.023 "enable_recv_pipe": true, 00:19:05.023 "enable_quickack": false, 00:19:05.023 "enable_placement_id": 0, 00:19:05.023 "enable_zerocopy_send_server": true, 00:19:05.023 "enable_zerocopy_send_client": false, 00:19:05.023 "zerocopy_threshold": 0, 00:19:05.023 "tls_version": 0, 00:19:05.023 "enable_ktls": false 00:19:05.023 } 00:19:05.023 } 00:19:05.023 ] 00:19:05.023 }, 00:19:05.023 { 00:19:05.023 "subsystem": "vmd", 00:19:05.023 "config": [] 00:19:05.023 }, 00:19:05.023 { 00:19:05.023 "subsystem": "accel", 00:19:05.023 "config": [ 00:19:05.023 { 00:19:05.023 "method": "accel_set_options", 00:19:05.023 "params": { 00:19:05.023 "small_cache_size": 128, 00:19:05.023 "large_cache_size": 16, 00:19:05.023 "task_count": 2048, 00:19:05.023 "sequence_count": 2048, 00:19:05.023 "buf_count": 2048 00:19:05.023 } 00:19:05.023 } 00:19:05.023 ] 00:19:05.023 }, 00:19:05.023 { 00:19:05.023 "subsystem": "bdev", 00:19:05.023 "config": [ 00:19:05.023 { 00:19:05.023 "method": "bdev_set_options", 00:19:05.023 "params": { 00:19:05.023 "bdev_io_pool_size": 65535, 00:19:05.023 "bdev_io_cache_size": 256, 00:19:05.023 "bdev_auto_examine": true, 00:19:05.023 "iobuf_small_cache_size": 128, 00:19:05.023 "iobuf_large_cache_size": 16 00:19:05.023 } 00:19:05.023 }, 00:19:05.023 { 00:19:05.023 "method": "bdev_raid_set_options", 00:19:05.023 "params": { 00:19:05.023 "process_window_size_kb": 1024 00:19:05.023 } 00:19:05.023 }, 00:19:05.023 { 00:19:05.023 "method": "bdev_iscsi_set_options", 00:19:05.023 "params": { 00:19:05.023 "timeout_sec": 30 00:19:05.023 } 00:19:05.024 }, 00:19:05.024 { 00:19:05.024 "method": "bdev_nvme_set_options", 00:19:05.024 "params": { 00:19:05.024 "action_on_timeout": "none", 00:19:05.024 "timeout_us": 0, 00:19:05.024 "timeout_admin_us": 0, 00:19:05.024 "keep_alive_timeout_ms": 10000, 00:19:05.024 "arbitration_burst": 0, 00:19:05.024 "low_priority_weight": 0, 00:19:05.024 
"medium_priority_weight": 0, 00:19:05.024 "high_priority_weight": 0, 00:19:05.024 "nvme_adminq_poll_period_us": 10000, 00:19:05.024 "nvme_ioq_poll_period_us": 0, 00:19:05.024 "io_queue_requests": 512, 00:19:05.024 "delay_cmd_submit": true, 00:19:05.024 "transport_retry_count": 4, 00:19:05.024 "bdev_retry_count": 3, 00:19:05.024 "transport_ack_timeout": 0, 00:19:05.024 "ctrlr_loss_timeout_sec": 0, 00:19:05.024 "reconnect_delay_sec": 0, 00:19:05.024 "fast_io_fail_timeout_sec": 0, 00:19:05.024 "disable_auto_failback": false, 00:19:05.024 "generate_uuids": false, 00:19:05.024 "transport_tos": 0, 00:19:05.024 "nvme_error_stat": false, 00:19:05.024 "rdma_srq_size": 0, 00:19:05.024 "io_path_stat": false, 00:19:05.024 "allow_accel_sequence": false, 00:19:05.024 "rdma_max_cq_size": 0, 00:19:05.024 "rdma_cm_event_timeout_ms": 0, 00:19:05.024 "dhchap_digests": [ 00:19:05.024 "sha256", 00:19:05.024 "sha384", 00:19:05.024 "sha512" 00:19:05.024 ], 00:19:05.024 "dhchap_dhgroups": [ 00:19:05.024 "null", 00:19:05.024 "ffdhe2048", 00:19:05.024 "ffdhe3072", 00:19:05.024 "ffdhe4096", 00:19:05.024 "ffdhe6144", 00:19:05.024 "ffdhe8192" 00:19:05.024 ] 00:19:05.024 } 00:19:05.024 }, 00:19:05.024 { 00:19:05.024 "method": "bdev_nvme_attach_controller", 00:19:05.024 "params": { 00:19:05.024 "name": "nvme0", 00:19:05.024 "trtype": "TCP", 00:19:05.024 "adrfam": "IPv4", 00:19:05.024 "traddr": "10.0.0.2", 00:19:05.024 "trsvcid": "4420", 00:19:05.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.024 "prchk_reftag": false, 00:19:05.024 "prchk_guard": false, 00:19:05.024 "ctrlr_loss_timeout_sec": 0, 00:19:05.024 "reconnect_delay_sec": 0, 00:19:05.024 "fast_io_fail_timeout_sec": 0, 00:19:05.024 "psk": "key0", 00:19:05.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.024 "hdgst": false, 00:19:05.024 "ddgst": false 00:19:05.024 } 00:19:05.024 }, 00:19:05.024 { 00:19:05.024 "method": "bdev_nvme_set_hotplug", 00:19:05.024 "params": { 00:19:05.024 "period_us": 100000, 00:19:05.024 "enable": false 00:19:05.024 } 00:19:05.024 }, 00:19:05.024 { 00:19:05.024 "method": "bdev_enable_histogram", 00:19:05.024 "params": { 00:19:05.024 "name": "nvme0n1", 00:19:05.024 "enable": true 00:19:05.024 } 00:19:05.024 }, 00:19:05.024 { 00:19:05.024 "method": "bdev_wait_for_examine" 00:19:05.024 } 00:19:05.024 ] 00:19:05.024 }, 00:19:05.024 { 00:19:05.024 "subsystem": "nbd", 00:19:05.024 "config": [] 00:19:05.024 } 00:19:05.024 ] 00:19:05.024 }' 00:19:05.024 21:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:05.024 [2024-04-18 21:12:20.944865] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:19:05.024 [2024-04-18 21:12:20.944912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083314 ] 00:19:05.284 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.284 [2024-04-18 21:12:21.005534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.284 [2024-04-18 21:12:21.078490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.543 [2024-04-18 21:12:21.220778] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.112 21:12:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:06.112 21:12:21 -- common/autotest_common.sh@850 -- # return 0 00:19:06.112 21:12:21 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:06.112 21:12:21 -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:06.112 21:12:21 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.112 21:12:21 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:06.112 Running I/O for 1 seconds... 00:19:07.492 00:19:07.492 Latency(us) 00:19:07.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.492 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:07.492 Verification LBA range: start 0x0 length 0x2000 00:19:07.492 nvme0n1 : 1.04 2391.71 9.34 0.00 0.00 52644.40 6012.22 99842.67 00:19:07.492 =================================================================================================================== 00:19:07.492 Total : 2391.71 9.34 0.00 0.00 52644.40 6012.22 99842.67 00:19:07.492 0 00:19:07.492 21:12:23 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:07.492 21:12:23 -- target/tls.sh@279 -- # cleanup 00:19:07.492 21:12:23 -- target/tls.sh@15 -- # process_shm --id 0 00:19:07.492 21:12:23 -- common/autotest_common.sh@794 -- # type=--id 00:19:07.492 21:12:23 -- common/autotest_common.sh@795 -- # id=0 00:19:07.492 21:12:23 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:07.492 21:12:23 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:07.492 21:12:23 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:07.492 21:12:23 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:07.492 21:12:23 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:07.492 21:12:23 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:07.492 nvmf_trace.0 00:19:07.492 21:12:23 -- common/autotest_common.sh@809 -- # return 0 00:19:07.492 21:12:23 -- target/tls.sh@16 -- # killprocess 3083314 00:19:07.492 21:12:23 -- common/autotest_common.sh@936 -- # '[' -z 3083314 ']' 00:19:07.492 21:12:23 -- common/autotest_common.sh@940 -- # kill -0 3083314 00:19:07.492 21:12:23 -- common/autotest_common.sh@941 -- # uname 00:19:07.492 21:12:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:07.492 21:12:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3083314 00:19:07.492 21:12:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:07.492 21:12:23 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:19:07.492 21:12:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3083314' 00:19:07.492 killing process with pid 3083314 00:19:07.492 21:12:23 -- common/autotest_common.sh@955 -- # kill 3083314 00:19:07.492 Received shutdown signal, test time was about 1.000000 seconds 00:19:07.492 00:19:07.492 Latency(us) 00:19:07.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.492 =================================================================================================================== 00:19:07.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.492 21:12:23 -- common/autotest_common.sh@960 -- # wait 3083314 00:19:07.492 21:12:23 -- target/tls.sh@17 -- # nvmftestfini 00:19:07.492 21:12:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:07.492 21:12:23 -- nvmf/common.sh@117 -- # sync 00:19:07.492 21:12:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.492 21:12:23 -- nvmf/common.sh@120 -- # set +e 00:19:07.492 21:12:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.492 21:12:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.492 rmmod nvme_tcp 00:19:07.751 rmmod nvme_fabrics 00:19:07.751 rmmod nvme_keyring 00:19:07.751 21:12:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.751 21:12:23 -- nvmf/common.sh@124 -- # set -e 00:19:07.751 21:12:23 -- nvmf/common.sh@125 -- # return 0 00:19:07.751 21:12:23 -- nvmf/common.sh@478 -- # '[' -n 3083078 ']' 00:19:07.751 21:12:23 -- nvmf/common.sh@479 -- # killprocess 3083078 00:19:07.751 21:12:23 -- common/autotest_common.sh@936 -- # '[' -z 3083078 ']' 00:19:07.751 21:12:23 -- common/autotest_common.sh@940 -- # kill -0 3083078 00:19:07.751 21:12:23 -- common/autotest_common.sh@941 -- # uname 00:19:07.751 21:12:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:07.751 21:12:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3083078 00:19:07.751 21:12:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:07.751 21:12:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:07.751 21:12:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3083078' 00:19:07.751 killing process with pid 3083078 00:19:07.751 21:12:23 -- common/autotest_common.sh@955 -- # kill 3083078 00:19:07.751 21:12:23 -- common/autotest_common.sh@960 -- # wait 3083078 00:19:08.011 21:12:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:08.011 21:12:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:08.011 21:12:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:08.011 21:12:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.011 21:12:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.011 21:12:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.011 21:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.011 21:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.921 21:12:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:09.921 21:12:25 -- target/tls.sh@18 -- # rm -f /tmp/tmp.p9sszzdDoP /tmp/tmp.Lu9I4YB0Nk /tmp/tmp.W1fES83uIr 00:19:09.921 00:19:09.921 real 1m26.002s 00:19:09.921 user 2m12.383s 00:19:09.921 sys 0m29.062s 00:19:09.921 21:12:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:09.921 21:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.921 ************************************ 00:19:09.921 END TEST nvmf_tls 00:19:09.921 
************************************ 00:19:09.921 21:12:25 -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:09.921 21:12:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:09.921 21:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:09.921 21:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:10.181 ************************************ 00:19:10.181 START TEST nvmf_fips 00:19:10.181 ************************************ 00:19:10.181 21:12:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:10.181 * Looking for test storage... 00:19:10.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:10.181 21:12:26 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.181 21:12:26 -- nvmf/common.sh@7 -- # uname -s 00:19:10.181 21:12:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.181 21:12:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.181 21:12:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.181 21:12:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.181 21:12:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.181 21:12:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.181 21:12:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.181 21:12:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.181 21:12:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.181 21:12:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.181 21:12:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.181 21:12:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.181 21:12:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.181 21:12:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.181 21:12:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.181 21:12:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.181 21:12:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.181 21:12:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.181 21:12:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.181 21:12:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.181 21:12:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.181 21:12:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.181 21:12:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.181 21:12:26 -- paths/export.sh@5 -- # export PATH 00:19:10.181 21:12:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.181 21:12:26 -- nvmf/common.sh@47 -- # : 0 00:19:10.181 21:12:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.181 21:12:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.181 21:12:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.181 21:12:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.181 21:12:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.181 21:12:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.181 21:12:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.181 21:12:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.181 21:12:26 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.182 21:12:26 -- fips/fips.sh@89 -- # check_openssl_version 00:19:10.182 21:12:26 -- fips/fips.sh@83 -- # local target=3.0.0 00:19:10.182 21:12:26 -- fips/fips.sh@85 -- # openssl version 00:19:10.182 21:12:26 -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:10.442 21:12:26 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:10.442 21:12:26 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:10.442 21:12:26 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:10.442 21:12:26 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:10.442 21:12:26 -- scripts/common.sh@333 -- # IFS=.-: 00:19:10.442 21:12:26 -- scripts/common.sh@333 -- # read -ra ver1 00:19:10.442 21:12:26 -- scripts/common.sh@334 -- # IFS=.-: 00:19:10.442 21:12:26 -- scripts/common.sh@334 -- # read -ra ver2 00:19:10.442 21:12:26 -- scripts/common.sh@335 -- # local 'op=>=' 00:19:10.442 21:12:26 -- scripts/common.sh@337 -- # ver1_l=3 00:19:10.442 21:12:26 -- scripts/common.sh@338 -- # ver2_l=3 00:19:10.442 21:12:26 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:19:10.442 21:12:26 -- scripts/common.sh@341 -- # case "$op" in 00:19:10.442 21:12:26 -- scripts/common.sh@345 -- # : 1 00:19:10.442 21:12:26 -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:10.442 21:12:26 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.442 21:12:26 -- scripts/common.sh@362 -- # decimal 3 00:19:10.442 21:12:26 -- scripts/common.sh@350 -- # local d=3 00:19:10.442 21:12:26 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:10.442 21:12:26 -- scripts/common.sh@352 -- # echo 3 00:19:10.442 21:12:26 -- scripts/common.sh@362 -- # ver1[v]=3 00:19:10.442 21:12:26 -- scripts/common.sh@363 -- # decimal 3 00:19:10.442 21:12:26 -- scripts/common.sh@350 -- # local d=3 00:19:10.442 21:12:26 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:10.442 21:12:26 -- scripts/common.sh@352 -- # echo 3 00:19:10.442 21:12:26 -- scripts/common.sh@363 -- # ver2[v]=3 00:19:10.442 21:12:26 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:10.442 21:12:26 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:10.442 21:12:26 -- scripts/common.sh@361 -- # (( v++ )) 00:19:10.442 21:12:26 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.442 21:12:26 -- scripts/common.sh@362 -- # decimal 0 00:19:10.442 21:12:26 -- scripts/common.sh@350 -- # local d=0 00:19:10.442 21:12:26 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:10.442 21:12:26 -- scripts/common.sh@352 -- # echo 0 00:19:10.442 21:12:26 -- scripts/common.sh@362 -- # ver1[v]=0 00:19:10.442 21:12:26 -- scripts/common.sh@363 -- # decimal 0 00:19:10.442 21:12:26 -- scripts/common.sh@350 -- # local d=0 00:19:10.442 21:12:26 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:10.442 21:12:26 -- scripts/common.sh@352 -- # echo 0 00:19:10.442 21:12:26 -- scripts/common.sh@363 -- # ver2[v]=0 00:19:10.442 21:12:26 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:10.442 21:12:26 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:10.442 21:12:26 -- scripts/common.sh@361 -- # (( v++ )) 00:19:10.442 21:12:26 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.442 21:12:26 -- scripts/common.sh@362 -- # decimal 9 00:19:10.442 21:12:26 -- scripts/common.sh@350 -- # local d=9 00:19:10.442 21:12:26 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:10.442 21:12:26 -- scripts/common.sh@352 -- # echo 9 00:19:10.442 21:12:26 -- scripts/common.sh@362 -- # ver1[v]=9 00:19:10.442 21:12:26 -- scripts/common.sh@363 -- # decimal 0 00:19:10.442 21:12:26 -- scripts/common.sh@350 -- # local d=0 00:19:10.442 21:12:26 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:10.442 21:12:26 -- scripts/common.sh@352 -- # echo 0 00:19:10.442 21:12:26 -- scripts/common.sh@363 -- # ver2[v]=0 00:19:10.442 21:12:26 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:10.442 21:12:26 -- scripts/common.sh@364 -- # return 0 00:19:10.442 21:12:26 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:10.442 21:12:26 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:10.442 21:12:26 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:10.442 21:12:26 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:10.442 21:12:26 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:10.442 21:12:26 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:10.442 21:12:26 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:10.442 21:12:26 -- fips/fips.sh@113 -- # build_openssl_config 00:19:10.442 21:12:26 -- fips/fips.sh@37 -- # cat 00:19:10.442 21:12:26 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:10.442 21:12:26 -- fips/fips.sh@58 -- # cat - 00:19:10.442 21:12:26 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:10.442 21:12:26 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:10.442 21:12:26 -- fips/fips.sh@116 -- # mapfile -t providers 00:19:10.442 21:12:26 -- fips/fips.sh@116 -- # openssl list -providers 00:19:10.442 21:12:26 -- fips/fips.sh@116 -- # grep name 00:19:10.442 21:12:26 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:10.442 21:12:26 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:10.442 21:12:26 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:10.442 21:12:26 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:10.442 21:12:26 -- common/autotest_common.sh@638 -- # local es=0 00:19:10.442 21:12:26 -- fips/fips.sh@127 -- # : 00:19:10.442 21:12:26 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:10.442 21:12:26 -- common/autotest_common.sh@626 -- # local arg=openssl 00:19:10.442 21:12:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:10.442 21:12:26 -- common/autotest_common.sh@630 -- # type -t openssl 00:19:10.442 21:12:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:10.442 21:12:26 -- common/autotest_common.sh@632 -- # type -P openssl 00:19:10.442 21:12:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:10.442 21:12:26 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:19:10.442 21:12:26 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:19:10.442 21:12:26 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:19:10.442 Error setting digest 00:19:10.442 0072B8EE067F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:10.442 0072B8EE067F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:10.442 21:12:26 -- common/autotest_common.sh@641 -- # es=1 00:19:10.442 21:12:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:10.442 21:12:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:10.442 21:12:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:10.442 21:12:26 -- fips/fips.sh@130 -- # nvmftestinit 00:19:10.442 21:12:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:10.442 21:12:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.442 21:12:26 -- nvmf/common.sh@437 -- # prepare_net_devs 
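Editor's note: stripped of the xtrace noise, the preflight fips.sh performed above boils down to four checks: the OpenSSL version must be at least 3.0.0, the FIPS provider module must exist, the generated spdk_fips.conf must leave both the base and fips providers loaded, and a non-approved digest such as MD5 must be refused, which is exactly what the "Error setting digest ... unsupported" lines show. A rough stand-alone sketch of the same idea (assuming an OpenSSL 3.x build with the FIPS module installed, and skipping the build_openssl_config/OPENSSL_CONF plumbing the real script does):

#!/usr/bin/env bash
# 1. Provider-based FIPS needs OpenSSL 3.0.0 or newer.
openssl version | awk '{print $2}'                 # fips.sh compares this against 3.0.0

# 2. The FIPS provider module has to be present in the modules directory.
test -f "$(openssl info -modulesdir)/fips.so" || echo "fips.so missing" >&2

# 3. Both the base and the fips providers should report as loaded.
openssl list -providers | grep name

# 4. MD5 must be rejected once FIPS is enforced; if it still works, FIPS mode is not active.
if openssl md5 /dev/null >/dev/null 2>&1; then
    echo "MD5 succeeded - FIPS enforcement is not in effect" >&2
fi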
00:19:10.442 21:12:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:10.442 21:12:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:10.442 21:12:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.442 21:12:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.442 21:12:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.442 21:12:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:10.442 21:12:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:10.442 21:12:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:10.442 21:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:17.013 21:12:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:17.013 21:12:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.013 21:12:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.013 21:12:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.013 21:12:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.013 21:12:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.013 21:12:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:17.013 21:12:31 -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.013 21:12:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.013 21:12:31 -- nvmf/common.sh@296 -- # e810=() 00:19:17.013 21:12:31 -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.013 21:12:31 -- nvmf/common.sh@297 -- # x722=() 00:19:17.014 21:12:31 -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.014 21:12:31 -- nvmf/common.sh@298 -- # mlx=() 00:19:17.014 21:12:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.014 21:12:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.014 21:12:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.014 21:12:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:17.014 21:12:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.014 21:12:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.014 21:12:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:17.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:17.014 21:12:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.014 21:12:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:17.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:17.014 21:12:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.014 21:12:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.014 21:12:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.014 21:12:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:17.014 21:12:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.014 21:12:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:17.014 Found net devices under 0000:86:00.0: cvl_0_0 00:19:17.014 21:12:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.014 21:12:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.014 21:12:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.014 21:12:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:17.014 21:12:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.014 21:12:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:17.014 Found net devices under 0000:86:00.1: cvl_0_1 00:19:17.014 21:12:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.014 21:12:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:17.014 21:12:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:17.014 21:12:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:17.014 21:12:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:17.014 21:12:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.014 21:12:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.014 21:12:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.014 21:12:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:17.014 21:12:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.014 21:12:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.014 21:12:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:17.014 21:12:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.014 21:12:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.014 21:12:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:17.014 21:12:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:17.014 21:12:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.014 21:12:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.014 21:12:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.014 21:12:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:19:17.014 21:12:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:17.014 21:12:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.014 21:12:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.014 21:12:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.014 21:12:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:17.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:19:17.014 00:19:17.014 --- 10.0.0.2 ping statistics --- 00:19:17.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.014 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:19:17.014 21:12:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:19:17.014 00:19:17.014 --- 10.0.0.1 ping statistics --- 00:19:17.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.014 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:19:17.014 21:12:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.014 21:12:32 -- nvmf/common.sh@411 -- # return 0 00:19:17.014 21:12:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:17.014 21:12:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.014 21:12:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:17.014 21:12:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:17.014 21:12:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.014 21:12:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:17.014 21:12:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:17.014 21:12:32 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:17.014 21:12:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:17.014 21:12:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:17.014 21:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:17.014 21:12:32 -- nvmf/common.sh@470 -- # nvmfpid=3087632 00:19:17.014 21:12:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.014 21:12:32 -- nvmf/common.sh@471 -- # waitforlisten 3087632 00:19:17.014 21:12:32 -- common/autotest_common.sh@817 -- # '[' -z 3087632 ']' 00:19:17.014 21:12:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.014 21:12:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.014 21:12:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.014 21:12:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.014 21:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:17.014 [2024-04-18 21:12:32.158226] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
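Editor's note: the nvmf_tcp_init sequence traced above moves one port of the E810 pair into a private network namespace so the target and the initiator can exercise real hardware on a single host. Condensed into plain commands (the cvl_0_0/cvl_0_1 names are simply what the ice ports are called on this machine), the setup is roughly:

# Target port goes into its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns

The target application itself is then launched with the same "ip netns exec cvl_0_0_ns_spdk" prefix, which is why the nvmfappstart line above wraps nvmf_tgt in it.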
00:19:17.014 [2024-04-18 21:12:32.158272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.014 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.014 [2024-04-18 21:12:32.218521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.014 [2024-04-18 21:12:32.287434] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.014 [2024-04-18 21:12:32.287475] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.014 [2024-04-18 21:12:32.287481] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.014 [2024-04-18 21:12:32.287491] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.014 [2024-04-18 21:12:32.287496] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.014 [2024-04-18 21:12:32.287524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.014 21:12:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:17.014 21:12:32 -- common/autotest_common.sh@850 -- # return 0 00:19:17.014 21:12:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:17.014 21:12:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:17.273 21:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:17.273 21:12:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.273 21:12:32 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:17.273 21:12:32 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:17.273 21:12:32 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:17.273 21:12:32 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:17.273 21:12:32 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:17.273 21:12:32 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:17.273 21:12:32 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:17.273 21:12:32 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:17.273 [2024-04-18 21:12:33.145427] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.273 [2024-04-18 21:12:33.161420] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.273 [2024-04-18 21:12:33.161608] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.273 [2024-04-18 21:12:33.189613] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:17.273 malloc0 00:19:17.531 21:12:33 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.531 21:12:33 -- fips/fips.sh@147 -- # bdevperf_pid=3087876 00:19:17.531 21:12:33 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.531 21:12:33 -- 
fips/fips.sh@148 -- # waitforlisten 3087876 /var/tmp/bdevperf.sock 00:19:17.531 21:12:33 -- common/autotest_common.sh@817 -- # '[' -z 3087876 ']' 00:19:17.531 21:12:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.531 21:12:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.531 21:12:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.531 21:12:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.531 21:12:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.531 [2024-04-18 21:12:33.268201] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:19:17.531 [2024-04-18 21:12:33.268246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087876 ] 00:19:17.531 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.531 [2024-04-18 21:12:33.323673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.531 [2024-04-18 21:12:33.395315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.466 21:12:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:18.466 21:12:34 -- common/autotest_common.sh@850 -- # return 0 00:19:18.466 21:12:34 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:18.466 [2024-04-18 21:12:34.194252] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.466 [2024-04-18 21:12:34.194328] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:18.466 TLSTESTn1 00:19:18.466 21:12:34 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:18.466 Running I/O for 10 seconds... 
00:19:30.705 00:19:30.705 Latency(us) 00:19:30.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.705 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:30.705 Verification LBA range: start 0x0 length 0x2000 00:19:30.705 TLSTESTn1 : 10.04 2583.65 10.09 0.00 0.00 49435.29 7123.48 69297.20 00:19:30.705 =================================================================================================================== 00:19:30.705 Total : 2583.65 10.09 0.00 0.00 49435.29 7123.48 69297.20 00:19:30.705 0 00:19:30.705 21:12:44 -- fips/fips.sh@1 -- # cleanup 00:19:30.705 21:12:44 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:30.705 21:12:44 -- common/autotest_common.sh@794 -- # type=--id 00:19:30.705 21:12:44 -- common/autotest_common.sh@795 -- # id=0 00:19:30.705 21:12:44 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:30.705 21:12:44 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:30.705 21:12:44 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:30.705 21:12:44 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:30.705 21:12:44 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:30.705 21:12:44 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:30.705 nvmf_trace.0 00:19:30.705 21:12:44 -- common/autotest_common.sh@809 -- # return 0 00:19:30.705 21:12:44 -- fips/fips.sh@16 -- # killprocess 3087876 00:19:30.705 21:12:44 -- common/autotest_common.sh@936 -- # '[' -z 3087876 ']' 00:19:30.705 21:12:44 -- common/autotest_common.sh@940 -- # kill -0 3087876 00:19:30.705 21:12:44 -- common/autotest_common.sh@941 -- # uname 00:19:30.705 21:12:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.705 21:12:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3087876 00:19:30.705 21:12:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:30.705 21:12:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:30.705 21:12:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3087876' 00:19:30.705 killing process with pid 3087876 00:19:30.705 21:12:44 -- common/autotest_common.sh@955 -- # kill 3087876 00:19:30.705 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.705 00:19:30.705 Latency(us) 00:19:30.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.705 =================================================================================================================== 00:19:30.705 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.705 [2024-04-18 21:12:44.547283] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:30.705 21:12:44 -- common/autotest_common.sh@960 -- # wait 3087876 00:19:30.705 21:12:44 -- fips/fips.sh@17 -- # nvmftestfini 00:19:30.705 21:12:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:30.705 21:12:44 -- nvmf/common.sh@117 -- # sync 00:19:30.705 21:12:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.705 21:12:44 -- nvmf/common.sh@120 -- # set +e 00:19:30.705 21:12:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.705 21:12:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.705 rmmod nvme_tcp 00:19:30.705 rmmod nvme_fabrics 00:19:30.705 rmmod nvme_keyring 
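Editor's note: the TLSTESTn1 job whose results appear above needed only two ingredients on top of the running target: a PSK written out in NVMe TLS interchange format with restrictive permissions, and a bdev_nvme_attach_controller call pointing at it. Both are taken verbatim from the trace further up; $SPDK_DIR below is just shorthand for the long /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix, not a variable the test defines. The deprecation warnings in this run say the path form of --psk goes away in v24.09; the tls.sh configuration earlier in this log already uses the keyring-name form instead.

# PSK in NVMe TLS interchange format, written with restrictive permissions.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > "$SPDK_DIR/test/nvmf/fips/key.txt"
chmod 0600 "$SPDK_DIR/test/nvmf/fips/key.txt"

# Attach the TLS-protected controller through the bdevperf RPC socket.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$SPDK_DIR/test/nvmf/fips/key.txt"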
00:19:30.705 21:12:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.705 21:12:44 -- nvmf/common.sh@124 -- # set -e 00:19:30.705 21:12:44 -- nvmf/common.sh@125 -- # return 0 00:19:30.705 21:12:44 -- nvmf/common.sh@478 -- # '[' -n 3087632 ']' 00:19:30.705 21:12:44 -- nvmf/common.sh@479 -- # killprocess 3087632 00:19:30.705 21:12:44 -- common/autotest_common.sh@936 -- # '[' -z 3087632 ']' 00:19:30.705 21:12:44 -- common/autotest_common.sh@940 -- # kill -0 3087632 00:19:30.705 21:12:44 -- common/autotest_common.sh@941 -- # uname 00:19:30.705 21:12:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.705 21:12:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3087632 00:19:30.705 21:12:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:30.705 21:12:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:30.705 21:12:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3087632' 00:19:30.705 killing process with pid 3087632 00:19:30.705 21:12:44 -- common/autotest_common.sh@955 -- # kill 3087632 00:19:30.705 [2024-04-18 21:12:44.861448] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:30.705 21:12:44 -- common/autotest_common.sh@960 -- # wait 3087632 00:19:30.705 21:12:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:30.705 21:12:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:30.705 21:12:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:30.705 21:12:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.705 21:12:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.705 21:12:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.705 21:12:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.705 21:12:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.273 21:12:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:31.273 21:12:47 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:31.273 00:19:31.273 real 0m21.149s 00:19:31.273 user 0m22.595s 00:19:31.273 sys 0m9.296s 00:19:31.273 21:12:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:31.273 21:12:47 -- common/autotest_common.sh@10 -- # set +x 00:19:31.273 ************************************ 00:19:31.273 END TEST nvmf_fips 00:19:31.273 ************************************ 00:19:31.273 21:12:47 -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:31.273 21:12:47 -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:31.273 21:12:47 -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:31.273 21:12:47 -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:31.273 21:12:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:31.273 21:12:47 -- common/autotest_common.sh@10 -- # set +x 00:19:37.838 21:12:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.838 21:12:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:37.838 21:12:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:37.838 21:12:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:37.838 21:12:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:37.838 21:12:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:37.838 21:12:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:37.838 21:12:52 -- nvmf/common.sh@295 -- # net_devs=() 00:19:37.838 21:12:52 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:19:37.838 21:12:52 -- nvmf/common.sh@296 -- # e810=() 00:19:37.838 21:12:52 -- nvmf/common.sh@296 -- # local -ga e810 00:19:37.838 21:12:52 -- nvmf/common.sh@297 -- # x722=() 00:19:37.838 21:12:52 -- nvmf/common.sh@297 -- # local -ga x722 00:19:37.838 21:12:52 -- nvmf/common.sh@298 -- # mlx=() 00:19:37.838 21:12:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:37.838 21:12:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.838 21:12:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.838 21:12:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.838 21:12:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.838 21:12:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.838 21:12:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.838 21:12:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.838 21:12:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.838 21:12:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.838 21:12:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.838 21:12:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.838 21:12:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:37.838 21:12:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:37.838 21:12:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:37.838 21:12:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.838 21:12:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:37.838 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:37.838 21:12:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.838 21:12:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:37.838 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:37.838 21:12:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:37.838 21:12:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.838 21:12:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.838 21:12:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:37.838 21:12:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.838 21:12:53 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:19:37.838 Found net devices under 0000:86:00.0: cvl_0_0 00:19:37.838 21:12:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.838 21:12:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.838 21:12:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.838 21:12:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:37.838 21:12:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.838 21:12:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:37.838 Found net devices under 0000:86:00.1: cvl_0_1 00:19:37.838 21:12:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.838 21:12:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:37.838 21:12:53 -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.838 21:12:53 -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:37.838 21:12:53 -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:37.838 21:12:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:37.838 21:12:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:37.838 21:12:53 -- common/autotest_common.sh@10 -- # set +x 00:19:37.838 ************************************ 00:19:37.838 START TEST nvmf_perf_adq 00:19:37.838 ************************************ 00:19:37.838 21:12:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:37.838 * Looking for test storage... 00:19:37.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:37.838 21:12:53 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.838 21:12:53 -- nvmf/common.sh@7 -- # uname -s 00:19:37.838 21:12:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.838 21:12:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.838 21:12:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.838 21:12:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.838 21:12:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.838 21:12:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.838 21:12:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.838 21:12:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.838 21:12:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.838 21:12:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.838 21:12:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:37.838 21:12:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:37.838 21:12:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.838 21:12:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.838 21:12:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.838 21:12:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.838 21:12:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.838 21:12:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.838 21:12:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.838 21:12:53 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.838 21:12:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.838 21:12:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.838 21:12:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.838 21:12:53 -- paths/export.sh@5 -- # export PATH 00:19:37.838 21:12:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.838 21:12:53 -- nvmf/common.sh@47 -- # : 0 00:19:37.838 21:12:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.838 21:12:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.838 21:12:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.838 21:12:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.838 21:12:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.838 21:12:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.838 21:12:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.838 21:12:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.838 21:12:53 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:37.838 21:12:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:37.838 21:12:53 -- common/autotest_common.sh@10 -- # set +x 00:19:43.119 21:12:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:43.119 21:12:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:43.119 21:12:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:43.119 21:12:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:43.119 
21:12:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:43.119 21:12:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:43.119 21:12:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:43.119 21:12:58 -- nvmf/common.sh@295 -- # net_devs=() 00:19:43.119 21:12:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:43.119 21:12:58 -- nvmf/common.sh@296 -- # e810=() 00:19:43.119 21:12:58 -- nvmf/common.sh@296 -- # local -ga e810 00:19:43.119 21:12:58 -- nvmf/common.sh@297 -- # x722=() 00:19:43.119 21:12:58 -- nvmf/common.sh@297 -- # local -ga x722 00:19:43.119 21:12:58 -- nvmf/common.sh@298 -- # mlx=() 00:19:43.119 21:12:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:43.119 21:12:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.119 21:12:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:43.119 21:12:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:43.119 21:12:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:43.119 21:12:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.119 21:12:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:43.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:43.119 21:12:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.119 21:12:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:43.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:43.119 21:12:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:43.119 21:12:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:43.119 21:12:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:43.119 21:12:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.119 21:12:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:43.119 21:12:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.119 21:12:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:43.119 Found net devices under 0000:86:00.0: cvl_0_0 00:19:43.119 21:12:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.119 21:12:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.119 21:12:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.119 21:12:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:43.119 21:12:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.119 21:12:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:43.119 Found net devices under 0000:86:00.1: cvl_0_1 00:19:43.119 21:12:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.119 21:12:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:43.119 21:12:58 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.119 21:12:58 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:43.119 21:12:58 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:43.119 21:12:58 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:19:43.119 21:12:58 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:44.497 21:13:00 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:46.398 21:13:02 -- target/perf_adq.sh@54 -- # sleep 5 00:19:51.666 21:13:07 -- target/perf_adq.sh@67 -- # nvmftestinit 00:19:51.666 21:13:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:51.666 21:13:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.666 21:13:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:51.666 21:13:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:51.667 21:13:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:51.667 21:13:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.667 21:13:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.667 21:13:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.667 21:13:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:51.667 21:13:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:51.667 21:13:07 -- common/autotest_common.sh@10 -- # set +x 00:19:51.667 21:13:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:51.667 21:13:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.667 21:13:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.667 21:13:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.667 21:13:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.667 21:13:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.667 21:13:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.667 21:13:07 -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.667 21:13:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.667 21:13:07 -- nvmf/common.sh@296 -- # e810=() 00:19:51.667 21:13:07 -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.667 21:13:07 -- nvmf/common.sh@297 -- # x722=() 00:19:51.667 21:13:07 -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.667 21:13:07 -- nvmf/common.sh@298 -- # mlx=() 00:19:51.667 21:13:07 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:19:51.667 21:13:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.667 21:13:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.667 21:13:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.667 21:13:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.667 21:13:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.667 21:13:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:51.667 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:51.667 21:13:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.667 21:13:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:51.667 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:51.667 21:13:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.667 21:13:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.667 21:13:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.667 21:13:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:51.667 21:13:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.667 21:13:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:51.667 Found net devices under 0000:86:00.0: cvl_0_0 00:19:51.667 21:13:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.667 21:13:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.667 21:13:07 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.667 21:13:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:51.667 21:13:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.667 21:13:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:51.667 Found net devices under 0000:86:00.1: cvl_0_1 00:19:51.667 21:13:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.667 21:13:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:51.667 21:13:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:51.667 21:13:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:51.667 21:13:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:51.667 21:13:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.667 21:13:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.667 21:13:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.667 21:13:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.667 21:13:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.667 21:13:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.667 21:13:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.667 21:13:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.667 21:13:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.667 21:13:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.667 21:13:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.667 21:13:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.667 21:13:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.667 21:13:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.667 21:13:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.667 21:13:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.667 21:13:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.668 21:13:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.668 21:13:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.668 21:13:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:19:51.668 00:19:51.668 --- 10.0.0.2 ping statistics --- 00:19:51.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.668 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:19:51.668 21:13:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:51.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:19:51.668 00:19:51.668 --- 10.0.0.1 ping statistics --- 00:19:51.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.668 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:19:51.668 21:13:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.668 21:13:07 -- nvmf/common.sh@411 -- # return 0 00:19:51.668 21:13:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:51.668 21:13:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.668 21:13:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:51.668 21:13:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:51.668 21:13:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.668 21:13:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:51.668 21:13:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:51.668 21:13:07 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:51.668 21:13:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:51.668 21:13:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:51.668 21:13:07 -- common/autotest_common.sh@10 -- # set +x 00:19:51.668 21:13:07 -- nvmf/common.sh@470 -- # nvmfpid=3098388 00:19:51.668 21:13:07 -- nvmf/common.sh@471 -- # waitforlisten 3098388 00:19:51.668 21:13:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:51.668 21:13:07 -- common/autotest_common.sh@817 -- # '[' -z 3098388 ']' 00:19:51.668 21:13:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.668 21:13:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:51.668 21:13:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.668 21:13:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:51.668 21:13:07 -- common/autotest_common.sh@10 -- # set +x 00:19:51.668 [2024-04-18 21:13:07.439713] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:19:51.668 [2024-04-18 21:13:07.439762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.668 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.668 [2024-04-18 21:13:07.503840] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.668 [2024-04-18 21:13:07.584184] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.668 [2024-04-18 21:13:07.584220] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.668 [2024-04-18 21:13:07.584227] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.668 [2024-04-18 21:13:07.584233] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.668 [2024-04-18 21:13:07.584238] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
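nvmf_tcp_init, traced above, splits the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24 as the target side, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction sanity-checks the link. Condensed from the commands in the trace (interface names and addresses as used by this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                    # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> initiator

Every target-side command from here on (nvmf_tgt itself, and later ethtool and tc) is therefore wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is what NVMF_TARGET_NS_CMD holds in the trace.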
00:19:51.668 [2024-04-18 21:13:07.584284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.668 [2024-04-18 21:13:07.584381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.668 [2024-04-18 21:13:07.584458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.668 [2024-04-18 21:13:07.584459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.602 21:13:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:52.602 21:13:08 -- common/autotest_common.sh@850 -- # return 0 00:19:52.602 21:13:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:52.602 21:13:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:52.602 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:19:52.602 21:13:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.602 21:13:08 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:19:52.602 21:13:08 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:52.602 21:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.602 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:19:52.602 21:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.602 21:13:08 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:52.602 21:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.602 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:19:52.602 21:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.602 21:13:08 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:52.602 21:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.602 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:19:52.602 [2024-04-18 21:13:08.393277] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.602 21:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.602 21:13:08 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:52.602 21:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.602 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:19:52.602 Malloc1 00:19:52.602 21:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.602 21:13:08 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.602 21:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.602 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:19:52.602 21:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.603 21:13:08 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.603 21:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.603 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:19:52.603 21:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.603 21:13:08 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.603 21:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.603 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:19:52.603 [2024-04-18 21:13:08.441299] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.603 21:13:08 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.603 21:13:08 -- target/perf_adq.sh@73 -- # perfpid=3098635 00:19:52.603 21:13:08 -- target/perf_adq.sh@74 -- # sleep 2 00:19:52.603 21:13:08 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:52.603 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.131 21:13:10 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:19:55.131 21:13:10 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:55.131 21:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.131 21:13:10 -- target/perf_adq.sh@76 -- # wc -l 00:19:55.131 21:13:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.131 21:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.131 21:13:10 -- target/perf_adq.sh@76 -- # count=4 00:19:55.131 21:13:10 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:19:55.131 21:13:10 -- target/perf_adq.sh@81 -- # wait 3098635 00:20:03.314 Initializing NVMe Controllers 00:20:03.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:03.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:03.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:03.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:03.314 Initialization complete. Launching workers. 00:20:03.314 ======================================================== 00:20:03.314 Latency(us) 00:20:03.314 Device Information : IOPS MiB/s Average min max 00:20:03.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9281.80 36.26 6895.35 2068.10 10726.53 00:20:03.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9600.20 37.50 6666.16 3057.78 10780.61 00:20:03.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9455.10 36.93 6769.48 2828.52 11025.14 00:20:03.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9402.50 36.73 6807.70 1991.72 11155.05 00:20:03.314 ======================================================== 00:20:03.314 Total : 37739.60 147.42 6783.67 1991.72 11155.05 00:20:03.314 00:20:03.314 21:13:18 -- target/perf_adq.sh@82 -- # nvmftestfini 00:20:03.314 21:13:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:03.314 21:13:18 -- nvmf/common.sh@117 -- # sync 00:20:03.314 21:13:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:03.314 21:13:18 -- nvmf/common.sh@120 -- # set +e 00:20:03.314 21:13:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.314 21:13:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:03.314 rmmod nvme_tcp 00:20:03.314 rmmod nvme_fabrics 00:20:03.314 rmmod nvme_keyring 00:20:03.314 21:13:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.314 21:13:18 -- nvmf/common.sh@124 -- # set -e 00:20:03.314 21:13:18 -- nvmf/common.sh@125 -- # return 0 00:20:03.314 21:13:18 -- nvmf/common.sh@478 -- # '[' -n 3098388 ']' 00:20:03.314 21:13:18 -- nvmf/common.sh@479 -- # killprocess 3098388 00:20:03.314 21:13:18 -- common/autotest_common.sh@936 -- # '[' -z 3098388 ']' 00:20:03.314 21:13:18 -- common/autotest_common.sh@940 -- # 
kill -0 3098388 00:20:03.314 21:13:18 -- common/autotest_common.sh@941 -- # uname 00:20:03.314 21:13:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:03.314 21:13:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3098388 00:20:03.314 21:13:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:03.314 21:13:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:03.314 21:13:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3098388' 00:20:03.314 killing process with pid 3098388 00:20:03.314 21:13:18 -- common/autotest_common.sh@955 -- # kill 3098388 00:20:03.314 21:13:18 -- common/autotest_common.sh@960 -- # wait 3098388 00:20:03.314 21:13:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:03.314 21:13:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:03.314 21:13:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:03.314 21:13:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.314 21:13:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:03.314 21:13:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.314 21:13:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.314 21:13:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.431 21:13:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:05.431 21:13:21 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:20:05.431 21:13:21 -- target/perf_adq.sh@52 -- # rmmod ice 00:20:06.367 21:13:22 -- target/perf_adq.sh@53 -- # modprobe ice 00:20:08.264 21:13:23 -- target/perf_adq.sh@54 -- # sleep 5 00:20:13.531 21:13:28 -- target/perf_adq.sh@87 -- # nvmftestinit 00:20:13.531 21:13:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:13.531 21:13:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.531 21:13:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:13.531 21:13:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:13.531 21:13:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:13.531 21:13:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.531 21:13:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.531 21:13:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.531 21:13:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:13.531 21:13:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.531 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:13.531 21:13:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:13.531 21:13:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.531 21:13:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.531 21:13:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.531 21:13:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.531 21:13:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.531 21:13:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.531 21:13:29 -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.531 21:13:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.531 21:13:29 -- nvmf/common.sh@296 -- # e810=() 00:20:13.531 21:13:29 -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.531 21:13:29 -- nvmf/common.sh@297 -- # x722=() 00:20:13.531 21:13:29 -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.531 21:13:29 -- nvmf/common.sh@298 -- # mlx=() 00:20:13.531 21:13:29 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:20:13.531 21:13:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.531 21:13:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.531 21:13:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.531 21:13:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.531 21:13:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.531 21:13:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:13.531 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:13.531 21:13:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.531 21:13:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:13.531 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:13.531 21:13:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.531 21:13:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.531 21:13:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.531 21:13:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:13.531 21:13:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.531 21:13:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:13.531 Found net devices under 0000:86:00.0: cvl_0_0 00:20:13.531 21:13:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.531 21:13:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.531 21:13:29 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.531 21:13:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:13.531 21:13:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.531 21:13:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:13.531 Found net devices under 0000:86:00.1: cvl_0_1 00:20:13.531 21:13:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.531 21:13:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:13.531 21:13:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:13.531 21:13:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:13.531 21:13:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:13.531 21:13:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.531 21:13:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.531 21:13:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.531 21:13:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.531 21:13:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.531 21:13:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.531 21:13:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.531 21:13:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.531 21:13:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.531 21:13:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.531 21:13:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.531 21:13:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.531 21:13:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.531 21:13:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.531 21:13:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.531 21:13:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.531 21:13:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.531 21:13:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.531 21:13:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.531 21:13:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:20:13.531 00:20:13.531 --- 10.0.0.2 ping statistics --- 00:20:13.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.532 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:13.532 21:13:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:13.532 00:20:13.532 --- 10.0.0.1 ping statistics --- 00:20:13.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.532 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:13.532 21:13:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.532 21:13:29 -- nvmf/common.sh@411 -- # return 0 00:20:13.532 21:13:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:13.532 21:13:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.532 21:13:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:13.532 21:13:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:13.532 21:13:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.532 21:13:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:13.532 21:13:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:13.532 21:13:29 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:20:13.532 21:13:29 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:13.532 21:13:29 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:13.532 21:13:29 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:13.532 net.core.busy_poll = 1 00:20:13.532 21:13:29 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:13.532 net.core.busy_read = 1 00:20:13.532 21:13:29 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:13.532 21:13:29 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:13.532 21:13:29 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:13.790 21:13:29 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:13.790 21:13:29 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:13.790 21:13:29 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:13.790 21:13:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:13.790 21:13:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:13.790 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:13.790 21:13:29 -- nvmf/common.sh@470 -- # nvmfpid=3102315 00:20:13.790 21:13:29 -- nvmf/common.sh@471 -- # waitforlisten 3102315 00:20:13.790 21:13:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:13.790 21:13:29 -- common/autotest_common.sh@817 -- # '[' -z 3102315 ']' 00:20:13.790 21:13:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.790 21:13:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:13.790 21:13:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
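adq_configure_driver, traced above, is where the ADQ-specific tuning of the target port happens before the second measurement: hardware TC offload is enabled on cvl_0_0, the driver's channel-pkt-inspect-optimize private flag is turned off, busy polling is enabled system-wide, and an mqprio root qdisc plus a flower filter pin NVMe/TCP traffic (TCP to 10.0.0.2:4420) to hardware traffic class 1. Pulled out of the trace, the sequence is:

    # run against the target namespace; cvl_0_0 is the E810 port the target listens on
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
        flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

With 'num_tc 2 map 0 1 queues 2@0 2@2', queues 0-1 form TC0 for default traffic and queues 2-3 form TC1, which is the class the flower filter's hw_tc 1 steers the NVMe/TCP connections into.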
00:20:13.790 21:13:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:13.790 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:13.790 [2024-04-18 21:13:29.570130] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:20:13.790 [2024-04-18 21:13:29.570174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.790 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.790 [2024-04-18 21:13:29.633151] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.790 [2024-04-18 21:13:29.714857] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.790 [2024-04-18 21:13:29.714894] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.790 [2024-04-18 21:13:29.714906] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.790 [2024-04-18 21:13:29.714912] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.790 [2024-04-18 21:13:29.714916] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.790 [2024-04-18 21:13:29.715163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.790 [2024-04-18 21:13:29.715242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.790 [2024-04-18 21:13:29.715417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.790 [2024-04-18 21:13:29.715420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.721 21:13:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.722 21:13:30 -- common/autotest_common.sh@850 -- # return 0 00:20:14.722 21:13:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:14.722 21:13:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:14.722 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 21:13:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.722 21:13:30 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:20:14.722 21:13:30 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:14.722 21:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.722 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 21:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.722 21:13:30 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:20:14.722 21:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.722 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 21:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.722 21:13:30 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:14.722 21:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.722 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 [2024-04-18 21:13:30.512262] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.722 21:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.722 21:13:30 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
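adq_configure_nvmf_target, whose rpc_cmd calls begin above and continue just below, brings the target up over JSON-RPC. Because nvmf_tgt was launched with --wait-for-rpc, the socket implementation options (placement id 1, zero-copy send) can still be set before framework_start_init; after that the TCP transport, a 64 MiB malloc bdev, the cnode1 subsystem, its namespace and its 10.0.0.2:4420 listener are created. The same sequence replayed with scripts/rpc.py (rpc_cmd in the harness forwards these same arguments to the target's RPC socket) would look like:

    rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The '-t tcp -o' portion comes from NVMF_TRANSPORT_OPTS as set earlier in the trace; the --sock-priority value (0 in the first pass, 1 here) is the argument adq_configure_nvmf_target was called with.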
00:20:14.722 21:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.722 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 Malloc1 00:20:14.722 21:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.722 21:13:30 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.722 21:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.722 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 21:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.722 21:13:30 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:14.722 21:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.722 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 21:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.722 21:13:30 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.722 21:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.722 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 [2024-04-18 21:13:30.560223] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.722 21:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.722 21:13:30 -- target/perf_adq.sh@94 -- # perfpid=3102454 00:20:14.722 21:13:30 -- target/perf_adq.sh@95 -- # sleep 2 00:20:14.722 21:13:30 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:14.722 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.247 21:13:32 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:20:17.247 21:13:32 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:17.247 21:13:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.247 21:13:32 -- target/perf_adq.sh@97 -- # wc -l 00:20:17.247 21:13:32 -- common/autotest_common.sh@10 -- # set +x 00:20:17.247 21:13:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.247 21:13:32 -- target/perf_adq.sh@97 -- # count=2 00:20:17.247 21:13:32 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:20:17.247 21:13:32 -- target/perf_adq.sh@103 -- # wait 3102454 00:20:25.346 Initializing NVMe Controllers 00:20:25.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:25.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:25.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:25.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:25.346 Initialization complete. Launching workers. 
00:20:25.346 ======================================================== 00:20:25.346 Latency(us) 00:20:25.346 Device Information : IOPS MiB/s Average min max 00:20:25.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11354.80 44.35 5636.98 1651.23 46548.60 00:20:25.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4591.30 17.93 13945.45 1807.48 59232.27 00:20:25.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5857.00 22.88 10929.96 1775.12 55705.17 00:20:25.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5160.90 20.16 12403.20 1815.12 60372.71 00:20:25.346 ======================================================== 00:20:25.346 Total : 26963.99 105.33 9496.48 1651.23 60372.71 00:20:25.346 00:20:25.346 21:13:40 -- target/perf_adq.sh@104 -- # nvmftestfini 00:20:25.346 21:13:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:25.346 21:13:40 -- nvmf/common.sh@117 -- # sync 00:20:25.346 21:13:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.346 21:13:40 -- nvmf/common.sh@120 -- # set +e 00:20:25.346 21:13:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.346 21:13:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.346 rmmod nvme_tcp 00:20:25.346 rmmod nvme_fabrics 00:20:25.346 rmmod nvme_keyring 00:20:25.346 21:13:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.346 21:13:40 -- nvmf/common.sh@124 -- # set -e 00:20:25.346 21:13:40 -- nvmf/common.sh@125 -- # return 0 00:20:25.346 21:13:40 -- nvmf/common.sh@478 -- # '[' -n 3102315 ']' 00:20:25.346 21:13:40 -- nvmf/common.sh@479 -- # killprocess 3102315 00:20:25.346 21:13:40 -- common/autotest_common.sh@936 -- # '[' -z 3102315 ']' 00:20:25.346 21:13:40 -- common/autotest_common.sh@940 -- # kill -0 3102315 00:20:25.346 21:13:40 -- common/autotest_common.sh@941 -- # uname 00:20:25.346 21:13:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.346 21:13:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3102315 00:20:25.346 21:13:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:25.346 21:13:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:25.346 21:13:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3102315' 00:20:25.346 killing process with pid 3102315 00:20:25.346 21:13:40 -- common/autotest_common.sh@955 -- # kill 3102315 00:20:25.346 21:13:40 -- common/autotest_common.sh@960 -- # wait 3102315 00:20:25.346 21:13:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:25.346 21:13:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:25.346 21:13:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:25.346 21:13:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.346 21:13:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.346 21:13:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.346 21:13:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.346 21:13:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.310 21:13:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.310 21:13:43 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:20:27.310 00:20:27.310 real 0m49.985s 00:20:27.310 user 2m48.713s 00:20:27.310 sys 0m9.895s 00:20:27.310 21:13:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:27.310 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:20:27.310 
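Each measurement pass above follows the same pattern: spdk_nvme_perf is started in the background on cores 0xF0 against the 10.0.0.2:4420 subsystem, the harness sleeps two seconds, and nvmf_get_stats piped through jq counts poll groups by current_io_qpairs, which is the actual steering check (the first pass required all four poll groups to hold exactly one qpair, the second required at least two groups to be idle before its '-lt 2' guard passed); only then does the script wait for perf and the latency table is printed. A condensed sketch of that flow, with the jenkins workspace paths shortened and a placeholder failure branch (the real script's handling of a failed check is not shown in this trace):

    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    perfpid=$!
    sleep 2
    # count poll groups currently holding no I/O qpair (the second pass's criterion)
    count=$(rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
    [[ $count -lt 2 ]] && echo "qpairs were not consolidated as expected" >&2   # placeholder
    wait $perfpid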
************************************ 00:20:27.310 END TEST nvmf_perf_adq 00:20:27.310 ************************************ 00:20:27.310 21:13:43 -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:27.310 21:13:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:27.310 21:13:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:27.310 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:20:27.568 ************************************ 00:20:27.568 START TEST nvmf_shutdown 00:20:27.568 ************************************ 00:20:27.568 21:13:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:27.568 * Looking for test storage... 00:20:27.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.568 21:13:43 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.568 21:13:43 -- nvmf/common.sh@7 -- # uname -s 00:20:27.568 21:13:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.568 21:13:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.568 21:13:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.568 21:13:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.568 21:13:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.569 21:13:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.569 21:13:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.569 21:13:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.569 21:13:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.569 21:13:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.569 21:13:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.569 21:13:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.569 21:13:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.569 21:13:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.569 21:13:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.569 21:13:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.569 21:13:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.569 21:13:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.569 21:13:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.569 21:13:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.569 21:13:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.569 21:13:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.569 21:13:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.569 21:13:43 -- paths/export.sh@5 -- # export PATH 00:20:27.569 21:13:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.569 21:13:43 -- nvmf/common.sh@47 -- # : 0 00:20:27.569 21:13:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.569 21:13:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.569 21:13:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.569 21:13:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.569 21:13:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.569 21:13:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.569 21:13:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.569 21:13:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.569 21:13:43 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.569 21:13:43 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.569 21:13:43 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:27.569 21:13:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:27.569 21:13:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:27.569 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:20:27.827 ************************************ 00:20:27.827 START TEST nvmf_shutdown_tc1 00:20:27.827 ************************************ 00:20:27.827 21:13:43 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:27.827 21:13:43 -- target/shutdown.sh@74 -- # starttarget 00:20:27.827 21:13:43 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:27.827 21:13:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:27.827 21:13:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.827 21:13:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:27.827 21:13:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:27.827 21:13:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:27.827 
21:13:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.827 21:13:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.827 21:13:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.827 21:13:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:27.827 21:13:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:27.827 21:13:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.827 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.389 21:13:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.389 21:13:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.389 21:13:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.389 21:13:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.389 21:13:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.389 21:13:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.389 21:13:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.389 21:13:49 -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.389 21:13:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.389 21:13:49 -- nvmf/common.sh@296 -- # e810=() 00:20:34.389 21:13:49 -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.389 21:13:49 -- nvmf/common.sh@297 -- # x722=() 00:20:34.389 21:13:49 -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.389 21:13:49 -- nvmf/common.sh@298 -- # mlx=() 00:20:34.389 21:13:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.389 21:13:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.389 21:13:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.389 21:13:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.389 21:13:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.389 21:13:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.389 21:13:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:34.389 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:34.389 21:13:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:34.389 21:13:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:34.389 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:34.389 21:13:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.389 21:13:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.389 21:13:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.389 21:13:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.389 21:13:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.389 21:13:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:34.389 Found net devices under 0000:86:00.0: cvl_0_0 00:20:34.389 21:13:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.389 21:13:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.389 21:13:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.389 21:13:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.389 21:13:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.389 21:13:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:34.389 Found net devices under 0000:86:00.1: cvl_0_1 00:20:34.389 21:13:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.389 21:13:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:34.389 21:13:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:34.389 21:13:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:34.389 21:13:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:34.389 21:13:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.389 21:13:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.389 21:13:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.389 21:13:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.389 21:13:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.389 21:13:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.389 21:13:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.389 21:13:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.389 21:13:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.389 21:13:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.389 21:13:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.389 21:13:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.389 21:13:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.389 21:13:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.389 21:13:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.389 21:13:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.389 21:13:49 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.389 21:13:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.390 21:13:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.390 21:13:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:20:34.390 00:20:34.390 --- 10.0.0.2 ping statistics --- 00:20:34.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.390 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:34.390 21:13:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:20:34.390 00:20:34.390 --- 10.0.0.1 ping statistics --- 00:20:34.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.390 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:34.390 21:13:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.390 21:13:49 -- nvmf/common.sh@411 -- # return 0 00:20:34.390 21:13:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.390 21:13:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.390 21:13:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.390 21:13:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.390 21:13:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.390 21:13:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.390 21:13:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.390 21:13:49 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:34.390 21:13:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:34.390 21:13:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.390 21:13:49 -- common/autotest_common.sh@10 -- # set +x 00:20:34.390 21:13:49 -- nvmf/common.sh@470 -- # nvmfpid=3108189 00:20:34.390 21:13:49 -- nvmf/common.sh@471 -- # waitforlisten 3108189 00:20:34.390 21:13:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:34.390 21:13:49 -- common/autotest_common.sh@817 -- # '[' -z 3108189 ']' 00:20:34.390 21:13:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.390 21:13:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.390 21:13:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.390 21:13:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.390 21:13:49 -- common/autotest_common.sh@10 -- # set +x 00:20:34.390 [2024-04-18 21:13:49.882657] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
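For reference, the nvmf_tcp_init sequence traced above reduces to a handful of ip/iptables commands: of the two ice-driven ports found above, cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator side, while cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace that will host the target. Condensed from the trace, with only the xtrace noise removed:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP on the default port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator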
00:20:34.390 [2024-04-18 21:13:49.882700] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.390 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.390 [2024-04-18 21:13:49.946908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.390 [2024-04-18 21:13:50.025951] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.390 [2024-04-18 21:13:50.025990] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.390 [2024-04-18 21:13:50.025998] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.390 [2024-04-18 21:13:50.026004] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.390 [2024-04-18 21:13:50.026009] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.390 [2024-04-18 21:13:50.026116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.390 [2024-04-18 21:13:50.026140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.390 [2024-04-18 21:13:50.026253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.390 [2024-04-18 21:13:50.026255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:34.956 21:13:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:34.956 21:13:50 -- common/autotest_common.sh@850 -- # return 0 00:20:34.956 21:13:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:34.956 21:13:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:34.956 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:20:34.956 21:13:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.956 21:13:50 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:34.956 21:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.956 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:20:34.956 [2024-04-18 21:13:50.722439] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.956 21:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.956 21:13:50 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:34.956 21:13:50 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:34.956 21:13:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.956 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:20:34.956 21:13:50 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 
-- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.956 21:13:50 -- target/shutdown.sh@28 -- # cat 00:20:34.956 21:13:50 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:34.956 21:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.956 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:20:34.956 Malloc1 00:20:34.957 [2024-04-18 21:13:50.818253] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.957 Malloc2 00:20:34.957 Malloc3 00:20:35.214 Malloc4 00:20:35.214 Malloc5 00:20:35.214 Malloc6 00:20:35.214 Malloc7 00:20:35.214 Malloc8 00:20:35.214 Malloc9 00:20:35.472 Malloc10 00:20:35.472 21:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.472 21:13:51 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:35.472 21:13:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.472 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:20:35.472 21:13:51 -- target/shutdown.sh@78 -- # perfpid=3108469 00:20:35.472 21:13:51 -- target/shutdown.sh@79 -- # waitforlisten 3108469 /var/tmp/bdevperf.sock 00:20:35.472 21:13:51 -- common/autotest_common.sh@817 -- # '[' -z 3108469 ']' 00:20:35.472 21:13:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.472 21:13:51 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:35.472 21:13:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.472 21:13:51 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:35.472 21:13:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
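The repeated cat calls above append one block per subsystem to rpcs.txt, which is then replayed against the running target in a single rpc_cmd invocation; xtrace does not echo the heredoc bodies, only their effect (the Malloc1-Malloc10 bdevs and the NVMe/TCP listener on 10.0.0.2:4420 above). A plausible per-subsystem block, using standard SPDK RPC method names, is sketched below; the malloc size/block-size and serial-number arguments are assumptions, not values taken from this run:

# one block per i in {1..10}, appended to rpcs.txt and batched through rpc.py
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420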
00:20:35.472 21:13:51 -- nvmf/common.sh@521 -- # config=() 00:20:35.472 21:13:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.472 21:13:51 -- nvmf/common.sh@521 -- # local subsystem config 00:20:35.472 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:20:35.472 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.472 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.472 { 00:20:35.472 "params": { 00:20:35.472 "name": "Nvme$subsystem", 00:20:35.472 "trtype": "$TEST_TRANSPORT", 00:20:35.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.472 "adrfam": "ipv4", 00:20:35.472 "trsvcid": "$NVMF_PORT", 00:20:35.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": "$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": "$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": "$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": 
"$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": "$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": "$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 [2024-04-18 21:13:51.289876] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:20:35.473 [2024-04-18 21:13:51.289924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": "$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": "$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.473 { 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme$subsystem", 00:20:35.473 "trtype": "$TEST_TRANSPORT", 00:20:35.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "$NVMF_PORT", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.473 "hdgst": ${hdgst:-false}, 00:20:35.473 "ddgst": ${ddgst:-false} 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 } 00:20:35.473 EOF 00:20:35.473 )") 00:20:35.473 21:13:51 -- nvmf/common.sh@543 -- # cat 00:20:35.473 21:13:51 -- nvmf/common.sh@545 -- # jq . 
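Stripped of the xtrace noise, gen_nvmf_target_json is a loop that emits one bdev_nvme_attach_controller parameter block per subsystem number passed to it, joins the blocks with commas, and validates the result with jq; the expanded output it produced for subsystems 1-10 is printed just below. A sketch of that shape follows; the enclosing "subsystems"/"bdev" wrapper is not echoed by xtrace and is an assumption, and the real helper uses tab-indented <<- heredocs:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the blocks with commas and validate the final document with jq
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}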
00:20:35.473 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.473 21:13:51 -- nvmf/common.sh@546 -- # IFS=, 00:20:35.473 21:13:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme1", 00:20:35.473 "trtype": "tcp", 00:20:35.473 "traddr": "10.0.0.2", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "4420", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.473 "hdgst": false, 00:20:35.473 "ddgst": false 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 },{ 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme2", 00:20:35.473 "trtype": "tcp", 00:20:35.473 "traddr": "10.0.0.2", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "4420", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.473 "hdgst": false, 00:20:35.473 "ddgst": false 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 },{ 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme3", 00:20:35.473 "trtype": "tcp", 00:20:35.473 "traddr": "10.0.0.2", 00:20:35.473 "adrfam": "ipv4", 00:20:35.473 "trsvcid": "4420", 00:20:35.473 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:35.473 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:35.473 "hdgst": false, 00:20:35.473 "ddgst": false 00:20:35.473 }, 00:20:35.473 "method": "bdev_nvme_attach_controller" 00:20:35.473 },{ 00:20:35.473 "params": { 00:20:35.473 "name": "Nvme4", 00:20:35.473 "trtype": "tcp", 00:20:35.474 "traddr": "10.0.0.2", 00:20:35.474 "adrfam": "ipv4", 00:20:35.474 "trsvcid": "4420", 00:20:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:35.474 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:35.474 "hdgst": false, 00:20:35.474 "ddgst": false 00:20:35.474 }, 00:20:35.474 "method": "bdev_nvme_attach_controller" 00:20:35.474 },{ 00:20:35.474 "params": { 00:20:35.474 "name": "Nvme5", 00:20:35.474 "trtype": "tcp", 00:20:35.474 "traddr": "10.0.0.2", 00:20:35.474 "adrfam": "ipv4", 00:20:35.474 "trsvcid": "4420", 00:20:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:35.474 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:35.474 "hdgst": false, 00:20:35.474 "ddgst": false 00:20:35.474 }, 00:20:35.474 "method": "bdev_nvme_attach_controller" 00:20:35.474 },{ 00:20:35.474 "params": { 00:20:35.474 "name": "Nvme6", 00:20:35.474 "trtype": "tcp", 00:20:35.474 "traddr": "10.0.0.2", 00:20:35.474 "adrfam": "ipv4", 00:20:35.474 "trsvcid": "4420", 00:20:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:35.474 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:35.474 "hdgst": false, 00:20:35.474 "ddgst": false 00:20:35.474 }, 00:20:35.474 "method": "bdev_nvme_attach_controller" 00:20:35.474 },{ 00:20:35.474 "params": { 00:20:35.474 "name": "Nvme7", 00:20:35.474 "trtype": "tcp", 00:20:35.474 "traddr": "10.0.0.2", 00:20:35.474 "adrfam": "ipv4", 00:20:35.474 "trsvcid": "4420", 00:20:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:35.474 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:35.474 "hdgst": false, 00:20:35.474 "ddgst": false 00:20:35.474 }, 00:20:35.474 "method": "bdev_nvme_attach_controller" 00:20:35.474 },{ 00:20:35.474 "params": { 00:20:35.474 "name": "Nvme8", 00:20:35.474 "trtype": "tcp", 00:20:35.474 "traddr": "10.0.0.2", 00:20:35.474 "adrfam": "ipv4", 00:20:35.474 "trsvcid": "4420", 00:20:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:35.474 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:35.474 "hdgst": false, 00:20:35.474 "ddgst": false 
00:20:35.474 }, 00:20:35.474 "method": "bdev_nvme_attach_controller" 00:20:35.474 },{ 00:20:35.474 "params": { 00:20:35.474 "name": "Nvme9", 00:20:35.474 "trtype": "tcp", 00:20:35.474 "traddr": "10.0.0.2", 00:20:35.474 "adrfam": "ipv4", 00:20:35.474 "trsvcid": "4420", 00:20:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:35.474 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:35.474 "hdgst": false, 00:20:35.474 "ddgst": false 00:20:35.474 }, 00:20:35.474 "method": "bdev_nvme_attach_controller" 00:20:35.474 },{ 00:20:35.474 "params": { 00:20:35.474 "name": "Nvme10", 00:20:35.474 "trtype": "tcp", 00:20:35.474 "traddr": "10.0.0.2", 00:20:35.474 "adrfam": "ipv4", 00:20:35.474 "trsvcid": "4420", 00:20:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:35.474 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:35.474 "hdgst": false, 00:20:35.474 "ddgst": false 00:20:35.474 }, 00:20:35.474 "method": "bdev_nvme_attach_controller" 00:20:35.474 }' 00:20:35.474 [2024-04-18 21:13:51.350533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.732 [2024-04-18 21:13:51.421775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.105 21:13:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.105 21:13:52 -- common/autotest_common.sh@850 -- # return 0 00:20:37.105 21:13:52 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:37.105 21:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.105 21:13:52 -- common/autotest_common.sh@10 -- # set +x 00:20:37.105 21:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.105 21:13:52 -- target/shutdown.sh@83 -- # kill -9 3108469 00:20:37.105 21:13:52 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:37.105 21:13:52 -- target/shutdown.sh@87 -- # sleep 1 00:20:38.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3108469 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:38.039 21:13:53 -- target/shutdown.sh@88 -- # kill -0 3108189 00:20:38.039 21:13:53 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:38.039 21:13:53 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:38.039 21:13:53 -- nvmf/common.sh@521 -- # config=() 00:20:38.039 21:13:53 -- nvmf/common.sh@521 -- # local subsystem config 00:20:38.039 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.039 { 00:20:38.039 "params": { 00:20:38.039 "name": "Nvme$subsystem", 00:20:38.039 "trtype": "$TEST_TRANSPORT", 00:20:38.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.039 "adrfam": "ipv4", 00:20:38.039 "trsvcid": "$NVMF_PORT", 00:20:38.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.039 "hdgst": ${hdgst:-false}, 00:20:38.039 "ddgst": ${ddgst:-false} 00:20:38.039 }, 00:20:38.039 "method": "bdev_nvme_attach_controller" 00:20:38.039 } 00:20:38.039 EOF 00:20:38.039 )") 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.039 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.039 { 00:20:38.039 "params": { 00:20:38.039 "name": "Nvme$subsystem", 00:20:38.039 "trtype": 
"$TEST_TRANSPORT", 00:20:38.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.039 "adrfam": "ipv4", 00:20:38.039 "trsvcid": "$NVMF_PORT", 00:20:38.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.039 "hdgst": ${hdgst:-false}, 00:20:38.039 "ddgst": ${ddgst:-false} 00:20:38.039 }, 00:20:38.039 "method": "bdev_nvme_attach_controller" 00:20:38.039 } 00:20:38.039 EOF 00:20:38.039 )") 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.039 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.039 { 00:20:38.039 "params": { 00:20:38.039 "name": "Nvme$subsystem", 00:20:38.039 "trtype": "$TEST_TRANSPORT", 00:20:38.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.039 "adrfam": "ipv4", 00:20:38.039 "trsvcid": "$NVMF_PORT", 00:20:38.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.039 "hdgst": ${hdgst:-false}, 00:20:38.039 "ddgst": ${ddgst:-false} 00:20:38.039 }, 00:20:38.039 "method": "bdev_nvme_attach_controller" 00:20:38.039 } 00:20:38.039 EOF 00:20:38.039 )") 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.039 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.039 { 00:20:38.039 "params": { 00:20:38.039 "name": "Nvme$subsystem", 00:20:38.039 "trtype": "$TEST_TRANSPORT", 00:20:38.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.039 "adrfam": "ipv4", 00:20:38.039 "trsvcid": "$NVMF_PORT", 00:20:38.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.039 "hdgst": ${hdgst:-false}, 00:20:38.039 "ddgst": ${ddgst:-false} 00:20:38.039 }, 00:20:38.039 "method": "bdev_nvme_attach_controller" 00:20:38.039 } 00:20:38.039 EOF 00:20:38.039 )") 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.039 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.039 { 00:20:38.039 "params": { 00:20:38.039 "name": "Nvme$subsystem", 00:20:38.039 "trtype": "$TEST_TRANSPORT", 00:20:38.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.039 "adrfam": "ipv4", 00:20:38.039 "trsvcid": "$NVMF_PORT", 00:20:38.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.039 "hdgst": ${hdgst:-false}, 00:20:38.039 "ddgst": ${ddgst:-false} 00:20:38.039 }, 00:20:38.039 "method": "bdev_nvme_attach_controller" 00:20:38.039 } 00:20:38.039 EOF 00:20:38.039 )") 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.039 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.039 { 00:20:38.039 "params": { 00:20:38.039 "name": "Nvme$subsystem", 00:20:38.039 "trtype": "$TEST_TRANSPORT", 00:20:38.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.039 "adrfam": "ipv4", 00:20:38.039 "trsvcid": "$NVMF_PORT", 00:20:38.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.039 "hdgst": ${hdgst:-false}, 00:20:38.039 "ddgst": ${ddgst:-false} 00:20:38.039 }, 00:20:38.039 "method": "bdev_nvme_attach_controller" 00:20:38.039 } 00:20:38.039 EOF 00:20:38.039 )") 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.039 
21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.039 { 00:20:38.039 "params": { 00:20:38.039 "name": "Nvme$subsystem", 00:20:38.039 "trtype": "$TEST_TRANSPORT", 00:20:38.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.039 "adrfam": "ipv4", 00:20:38.039 "trsvcid": "$NVMF_PORT", 00:20:38.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.039 "hdgst": ${hdgst:-false}, 00:20:38.039 "ddgst": ${ddgst:-false} 00:20:38.039 }, 00:20:38.039 "method": "bdev_nvme_attach_controller" 00:20:38.039 } 00:20:38.039 EOF 00:20:38.039 )") 00:20:38.039 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.039 [2024-04-18 21:13:53.788222] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:20:38.040 [2024-04-18 21:13:53.788270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108844 ] 00:20:38.040 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.040 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.040 { 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme$subsystem", 00:20:38.040 "trtype": "$TEST_TRANSPORT", 00:20:38.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "$NVMF_PORT", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.040 "hdgst": ${hdgst:-false}, 00:20:38.040 "ddgst": ${ddgst:-false} 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 } 00:20:38.040 EOF 00:20:38.040 )") 00:20:38.040 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.040 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.040 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.040 { 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme$subsystem", 00:20:38.040 "trtype": "$TEST_TRANSPORT", 00:20:38.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "$NVMF_PORT", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.040 "hdgst": ${hdgst:-false}, 00:20:38.040 "ddgst": ${ddgst:-false} 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 } 00:20:38.040 EOF 00:20:38.040 )") 00:20:38.040 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.040 21:13:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.040 21:13:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.040 { 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme$subsystem", 00:20:38.040 "trtype": "$TEST_TRANSPORT", 00:20:38.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "$NVMF_PORT", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.040 "hdgst": ${hdgst:-false}, 00:20:38.040 "ddgst": ${ddgst:-false} 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 } 00:20:38.040 EOF 00:20:38.040 )") 00:20:38.040 21:13:53 -- nvmf/common.sh@543 -- # cat 00:20:38.040 21:13:53 -- nvmf/common.sh@545 -- # jq . 
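Condensing the tc1 flow from the trace: a disposable bdev_svc instance attaches to all ten subsystems through the generated JSON (handed over via process substitution on /dev/fd/63), is killed with SIGKILL to simulate an abrupt host-side shutdown, the target PID is then probed with kill -0 to confirm nvmf_tgt survived, and only afterwards does bdevperf run a short verify workload against the same subsystems. With the long build paths shortened to the binary names, the sequence is roughly:

bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) &
perfpid=$!                                         # 3108469 in this run
waitforlisten $perfpid /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
kill -9 $perfpid                                   # abrupt death of the host-side app
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 $nvmfpid                                   # nvmf_tgt (3108189) must still be alive
bdevperf --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 1

The one-second verify run whose per-controller results are reported below is the actual pass/fail signal: if the target had died along with the killed application, bdevperf could not attach to any of the ten controllers.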
00:20:38.040 21:13:53 -- nvmf/common.sh@546 -- # IFS=, 00:20:38.040 21:13:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme1", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme2", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme3", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme4", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme5", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme6", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme7", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme8", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": 
"bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme9", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 },{ 00:20:38.040 "params": { 00:20:38.040 "name": "Nvme10", 00:20:38.040 "trtype": "tcp", 00:20:38.040 "traddr": "10.0.0.2", 00:20:38.040 "adrfam": "ipv4", 00:20:38.040 "trsvcid": "4420", 00:20:38.040 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:38.040 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:38.040 "hdgst": false, 00:20:38.040 "ddgst": false 00:20:38.040 }, 00:20:38.040 "method": "bdev_nvme_attach_controller" 00:20:38.040 }' 00:20:38.040 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.040 [2024-04-18 21:13:53.851843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.040 [2024-04-18 21:13:53.923723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.413 Running I/O for 1 seconds... 00:20:40.788 00:20:40.788 Latency(us) 00:20:40.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.788 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.788 Verification LBA range: start 0x0 length 0x400 00:20:40.788 Nvme1n1 : 1.04 247.22 15.45 0.00 0.00 256430.30 19603.81 215186.03 00:20:40.788 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.788 Verification LBA range: start 0x0 length 0x400 00:20:40.788 Nvme2n1 : 1.12 227.65 14.23 0.00 0.00 274652.16 24276.81 266247.12 00:20:40.788 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.788 Verification LBA range: start 0x0 length 0x400 00:20:40.788 Nvme3n1 : 1.10 290.86 18.18 0.00 0.00 211588.14 20857.54 204244.37 00:20:40.788 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.788 Verification LBA range: start 0x0 length 0x400 00:20:40.788 Nvme4n1 : 1.07 303.07 18.94 0.00 0.00 196093.53 18578.03 192390.90 00:20:40.788 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.788 Verification LBA range: start 0x0 length 0x400 00:20:40.788 Nvme5n1 : 1.13 284.38 17.77 0.00 0.00 209423.49 21313.45 217921.45 00:20:40.788 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.788 Verification LBA range: start 0x0 length 0x400 00:20:40.788 Nvme6n1 : 1.11 290.43 18.15 0.00 0.00 202269.93 1937.59 214274.23 00:20:40.788 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.788 Verification LBA range: start 0x0 length 0x400 00:20:40.788 Nvme7n1 : 1.12 285.38 17.84 0.00 0.00 203227.49 20857.54 217921.45 00:20:40.788 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.788 Verification LBA range: start 0x0 length 0x400 00:20:40.788 Nvme8n1 : 1.11 229.64 14.35 0.00 0.00 248464.47 22795.13 262599.90 00:20:40.789 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.789 Verification LBA range: start 0x0 length 0x400 00:20:40.789 Nvme9n1 : 1.14 281.46 17.59 0.00 0.00 200060.57 14702.86 251658.24 00:20:40.789 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.789 Verification LBA range: start 0x0 length 0x400 00:20:40.789 
Nvme10n1 : 1.19 322.66 20.17 0.00 0.00 172942.84 12879.25 232510.33 00:20:40.789 =================================================================================================================== 00:20:40.789 Total : 2762.74 172.67 0.00 0.00 213904.28 1937.59 266247.12 00:20:41.047 21:13:56 -- target/shutdown.sh@94 -- # stoptarget 00:20:41.047 21:13:56 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:41.047 21:13:56 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:41.047 21:13:56 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:41.047 21:13:56 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:41.047 21:13:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:41.047 21:13:56 -- nvmf/common.sh@117 -- # sync 00:20:41.047 21:13:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:41.047 21:13:56 -- nvmf/common.sh@120 -- # set +e 00:20:41.047 21:13:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:41.047 21:13:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:41.047 rmmod nvme_tcp 00:20:41.047 rmmod nvme_fabrics 00:20:41.047 rmmod nvme_keyring 00:20:41.047 21:13:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:41.047 21:13:56 -- nvmf/common.sh@124 -- # set -e 00:20:41.047 21:13:56 -- nvmf/common.sh@125 -- # return 0 00:20:41.047 21:13:56 -- nvmf/common.sh@478 -- # '[' -n 3108189 ']' 00:20:41.047 21:13:56 -- nvmf/common.sh@479 -- # killprocess 3108189 00:20:41.047 21:13:56 -- common/autotest_common.sh@936 -- # '[' -z 3108189 ']' 00:20:41.047 21:13:56 -- common/autotest_common.sh@940 -- # kill -0 3108189 00:20:41.047 21:13:56 -- common/autotest_common.sh@941 -- # uname 00:20:41.047 21:13:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:41.047 21:13:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3108189 00:20:41.047 21:13:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:41.047 21:13:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:41.047 21:13:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3108189' 00:20:41.047 killing process with pid 3108189 00:20:41.047 21:13:56 -- common/autotest_common.sh@955 -- # kill 3108189 00:20:41.047 21:13:56 -- common/autotest_common.sh@960 -- # wait 3108189 00:20:41.614 21:13:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:41.614 21:13:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:41.614 21:13:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:41.614 21:13:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.614 21:13:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.614 21:13:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.614 21:13:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.614 21:13:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.516 21:13:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:43.516 00:20:43.516 real 0m15.834s 00:20:43.516 user 0m34.428s 00:20:43.516 sys 0m6.179s 00:20:43.516 21:13:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:43.516 21:13:59 -- common/autotest_common.sh@10 -- # set +x 00:20:43.516 ************************************ 00:20:43.516 END TEST nvmf_shutdown_tc1 00:20:43.516 ************************************ 00:20:43.516 21:13:59 -- target/shutdown.sh@148 -- # run_test 
nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:43.516 21:13:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:43.516 21:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:43.516 21:13:59 -- common/autotest_common.sh@10 -- # set +x 00:20:43.774 ************************************ 00:20:43.774 START TEST nvmf_shutdown_tc2 00:20:43.774 ************************************ 00:20:43.774 21:13:59 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:20:43.774 21:13:59 -- target/shutdown.sh@99 -- # starttarget 00:20:43.774 21:13:59 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:43.774 21:13:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:43.774 21:13:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.774 21:13:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:43.774 21:13:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:43.774 21:13:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:43.774 21:13:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.774 21:13:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.774 21:13:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.774 21:13:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:43.774 21:13:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.774 21:13:59 -- common/autotest_common.sh@10 -- # set +x 00:20:43.774 21:13:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:43.774 21:13:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:43.774 21:13:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:43.774 21:13:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:43.774 21:13:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:43.774 21:13:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:43.774 21:13:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:43.774 21:13:59 -- nvmf/common.sh@295 -- # net_devs=() 00:20:43.774 21:13:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:43.774 21:13:59 -- nvmf/common.sh@296 -- # e810=() 00:20:43.774 21:13:59 -- nvmf/common.sh@296 -- # local -ga e810 00:20:43.774 21:13:59 -- nvmf/common.sh@297 -- # x722=() 00:20:43.774 21:13:59 -- nvmf/common.sh@297 -- # local -ga x722 00:20:43.774 21:13:59 -- nvmf/common.sh@298 -- # mlx=() 00:20:43.774 21:13:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:43.774 21:13:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.774 21:13:59 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:20:43.774 21:13:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:43.774 21:13:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:43.774 21:13:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.774 21:13:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:43.774 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:43.774 21:13:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.774 21:13:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:43.774 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:43.774 21:13:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:43.774 21:13:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.774 21:13:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.774 21:13:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:43.774 21:13:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.774 21:13:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:43.774 Found net devices under 0000:86:00.0: cvl_0_0 00:20:43.774 21:13:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.774 21:13:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.774 21:13:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.774 21:13:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:43.774 21:13:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.774 21:13:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:43.774 Found net devices under 0000:86:00.1: cvl_0_1 00:20:43.774 21:13:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.774 21:13:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:43.774 21:13:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:43.774 21:13:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:43.774 21:13:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:43.774 21:13:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.774 21:13:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.774 21:13:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.774 21:13:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:43.774 21:13:59 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.774 21:13:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.774 21:13:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:43.774 21:13:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.774 21:13:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.774 21:13:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:43.774 21:13:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:43.774 21:13:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.774 21:13:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.774 21:13:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.774 21:13:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.775 21:13:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.033 21:13:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.033 21:13:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.033 21:13:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.033 21:13:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:20:44.033 00:20:44.033 --- 10.0.0.2 ping statistics --- 00:20:44.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.033 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:20:44.033 21:13:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:20:44.033 00:20:44.033 --- 10.0.0.1 ping statistics --- 00:20:44.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.033 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:20:44.033 21:13:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.033 21:13:59 -- nvmf/common.sh@411 -- # return 0 00:20:44.033 21:13:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:44.033 21:13:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.033 21:13:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:44.033 21:13:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:44.033 21:13:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.033 21:13:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:44.033 21:13:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:44.033 21:13:59 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:44.033 21:13:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:44.033 21:13:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:44.033 21:13:59 -- common/autotest_common.sh@10 -- # set +x 00:20:44.033 21:13:59 -- nvmf/common.sh@470 -- # nvmfpid=3109993 00:20:44.033 21:13:59 -- nvmf/common.sh@471 -- # waitforlisten 3109993 00:20:44.033 21:13:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:44.033 21:13:59 -- common/autotest_common.sh@817 -- # '[' -z 3109993 ']' 00:20:44.033 21:13:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.033 21:13:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:44.033 21:13:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.033 21:13:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:44.033 21:13:59 -- common/autotest_common.sh@10 -- # set +x 00:20:44.033 [2024-04-18 21:13:59.925839] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:20:44.033 [2024-04-18 21:13:59.925885] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.033 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.291 [2024-04-18 21:13:59.991665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.291 [2024-04-18 21:14:00.079310] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.291 [2024-04-18 21:14:00.079349] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.291 [2024-04-18 21:14:00.079357] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.291 [2024-04-18 21:14:00.079363] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.291 [2024-04-18 21:14:00.079368] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
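As in tc1, nvmfappstart -m 0x1E launches nvmf_tgt inside the target namespace with reactor mask 0x1E (binary 11110, i.e. cores 1-4), which is why the four reactor threads just below come up on cores 1 through 4 while core 0 is left free for the host-side tools (bdev_svc and bdevperf run with -m 0x1). The namespace prefix appears twice in the command above because the nvmf_tcp_init helper prepends it once per test case; the effect is the same as a single prefix. A minimal sketch of what nvmfappstart does here, with the build path shortened to the binary name:

ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!                            # 3109993 in this run
waitforlisten $nvmfpid                # polls /var/tmp/spdk.sock until the RPC server answers
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT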
00:20:44.291 [2024-04-18 21:14:00.079470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.291 [2024-04-18 21:14:00.079565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.291 [2024-04-18 21:14:00.079671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.291 [2024-04-18 21:14:00.079673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:44.856 21:14:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.857 21:14:00 -- common/autotest_common.sh@850 -- # return 0 00:20:44.857 21:14:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:44.857 21:14:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:44.857 21:14:00 -- common/autotest_common.sh@10 -- # set +x 00:20:44.857 21:14:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.857 21:14:00 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.857 21:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.857 21:14:00 -- common/autotest_common.sh@10 -- # set +x 00:20:44.857 [2024-04-18 21:14:00.769398] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.857 21:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.857 21:14:00 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:44.857 21:14:00 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:44.857 21:14:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:44.857 21:14:00 -- common/autotest_common.sh@10 -- # set +x 00:20:44.857 21:14:00 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:44.857 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.857 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.115 21:14:00 -- target/shutdown.sh@28 -- # cat 00:20:45.115 21:14:00 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:45.115 21:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.115 21:14:00 -- common/autotest_common.sh@10 -- # set +x 00:20:45.115 Malloc1 00:20:45.115 [2024-04-18 21:14:00.865545] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.115 Malloc2 
00:20:45.115 Malloc3 00:20:45.115 Malloc4 00:20:45.115 Malloc5 00:20:45.372 Malloc6 00:20:45.372 Malloc7 00:20:45.372 Malloc8 00:20:45.372 Malloc9 00:20:45.372 Malloc10 00:20:45.372 21:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.372 21:14:01 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:45.372 21:14:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:45.372 21:14:01 -- common/autotest_common.sh@10 -- # set +x 00:20:45.372 21:14:01 -- target/shutdown.sh@103 -- # perfpid=3110266 00:20:45.372 21:14:01 -- target/shutdown.sh@104 -- # waitforlisten 3110266 /var/tmp/bdevperf.sock 00:20:45.372 21:14:01 -- common/autotest_common.sh@817 -- # '[' -z 3110266 ']' 00:20:45.372 21:14:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.373 21:14:01 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:45.373 21:14:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:45.373 21:14:01 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:45.373 21:14:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.373 21:14:01 -- nvmf/common.sh@521 -- # config=() 00:20:45.373 21:14:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:45.373 21:14:01 -- nvmf/common.sh@521 -- # local subsystem config 00:20:45.373 21:14:01 -- common/autotest_common.sh@10 -- # set +x 00:20:45.373 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.373 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.373 { 00:20:45.373 "params": { 00:20:45.373 "name": "Nvme$subsystem", 00:20:45.373 "trtype": "$TEST_TRANSPORT", 00:20:45.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.373 "adrfam": "ipv4", 00:20:45.373 "trsvcid": "$NVMF_PORT", 00:20:45.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.373 "hdgst": ${hdgst:-false}, 00:20:45.373 "ddgst": ${ddgst:-false} 00:20:45.373 }, 00:20:45.373 "method": "bdev_nvme_attach_controller" 00:20:45.373 } 00:20:45.373 EOF 00:20:45.373 )") 00:20:45.373 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.373 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.373 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.373 { 00:20:45.373 "params": { 00:20:45.373 "name": "Nvme$subsystem", 00:20:45.373 "trtype": "$TEST_TRANSPORT", 00:20:45.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.373 "adrfam": "ipv4", 00:20:45.373 "trsvcid": "$NVMF_PORT", 00:20:45.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.373 "hdgst": ${hdgst:-false}, 00:20:45.373 "ddgst": ${ddgst:-false} 00:20:45.373 }, 00:20:45.373 "method": "bdev_nvme_attach_controller" 00:20:45.373 } 00:20:45.373 EOF 00:20:45.373 )") 00:20:45.373 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.633 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.633 { 00:20:45.633 "params": { 00:20:45.633 "name": "Nvme$subsystem", 00:20:45.633 "trtype": "$TEST_TRANSPORT", 00:20:45.633 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:20:45.633 "adrfam": "ipv4", 00:20:45.633 "trsvcid": "$NVMF_PORT", 00:20:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.633 "hdgst": ${hdgst:-false}, 00:20:45.633 "ddgst": ${ddgst:-false} 00:20:45.633 }, 00:20:45.633 "method": "bdev_nvme_attach_controller" 00:20:45.633 } 00:20:45.633 EOF 00:20:45.633 )") 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.633 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.633 { 00:20:45.633 "params": { 00:20:45.633 "name": "Nvme$subsystem", 00:20:45.633 "trtype": "$TEST_TRANSPORT", 00:20:45.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.633 "adrfam": "ipv4", 00:20:45.633 "trsvcid": "$NVMF_PORT", 00:20:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.633 "hdgst": ${hdgst:-false}, 00:20:45.633 "ddgst": ${ddgst:-false} 00:20:45.633 }, 00:20:45.633 "method": "bdev_nvme_attach_controller" 00:20:45.633 } 00:20:45.633 EOF 00:20:45.633 )") 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.633 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.633 { 00:20:45.633 "params": { 00:20:45.633 "name": "Nvme$subsystem", 00:20:45.633 "trtype": "$TEST_TRANSPORT", 00:20:45.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.633 "adrfam": "ipv4", 00:20:45.633 "trsvcid": "$NVMF_PORT", 00:20:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.633 "hdgst": ${hdgst:-false}, 00:20:45.633 "ddgst": ${ddgst:-false} 00:20:45.633 }, 00:20:45.633 "method": "bdev_nvme_attach_controller" 00:20:45.633 } 00:20:45.633 EOF 00:20:45.633 )") 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.633 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.633 { 00:20:45.633 "params": { 00:20:45.633 "name": "Nvme$subsystem", 00:20:45.633 "trtype": "$TEST_TRANSPORT", 00:20:45.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.633 "adrfam": "ipv4", 00:20:45.633 "trsvcid": "$NVMF_PORT", 00:20:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.633 "hdgst": ${hdgst:-false}, 00:20:45.633 "ddgst": ${ddgst:-false} 00:20:45.633 }, 00:20:45.633 "method": "bdev_nvme_attach_controller" 00:20:45.633 } 00:20:45.633 EOF 00:20:45.633 )") 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.633 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.633 { 00:20:45.633 "params": { 00:20:45.633 "name": "Nvme$subsystem", 00:20:45.633 "trtype": "$TEST_TRANSPORT", 00:20:45.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.633 "adrfam": "ipv4", 00:20:45.633 "trsvcid": "$NVMF_PORT", 00:20:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.633 "hdgst": ${hdgst:-false}, 00:20:45.633 "ddgst": ${ddgst:-false} 00:20:45.633 }, 00:20:45.633 "method": "bdev_nvme_attach_controller" 00:20:45.633 } 00:20:45.633 EOF 00:20:45.633 )") 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.633 [2024-04-18 21:14:01.335505] Starting SPDK v24.05-pre git sha1 
99b3305a5 / DPDK 23.11.0 initialization... 00:20:45.633 [2024-04-18 21:14:01.335563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110266 ] 00:20:45.633 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.633 { 00:20:45.633 "params": { 00:20:45.633 "name": "Nvme$subsystem", 00:20:45.633 "trtype": "$TEST_TRANSPORT", 00:20:45.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.633 "adrfam": "ipv4", 00:20:45.633 "trsvcid": "$NVMF_PORT", 00:20:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.633 "hdgst": ${hdgst:-false}, 00:20:45.633 "ddgst": ${ddgst:-false} 00:20:45.633 }, 00:20:45.633 "method": "bdev_nvme_attach_controller" 00:20:45.633 } 00:20:45.633 EOF 00:20:45.633 )") 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.633 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.633 { 00:20:45.633 "params": { 00:20:45.633 "name": "Nvme$subsystem", 00:20:45.633 "trtype": "$TEST_TRANSPORT", 00:20:45.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.633 "adrfam": "ipv4", 00:20:45.633 "trsvcid": "$NVMF_PORT", 00:20:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.633 "hdgst": ${hdgst:-false}, 00:20:45.633 "ddgst": ${ddgst:-false} 00:20:45.633 }, 00:20:45.633 "method": "bdev_nvme_attach_controller" 00:20:45.633 } 00:20:45.633 EOF 00:20:45.633 )") 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.633 21:14:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:45.633 21:14:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:45.633 { 00:20:45.633 "params": { 00:20:45.633 "name": "Nvme$subsystem", 00:20:45.633 "trtype": "$TEST_TRANSPORT", 00:20:45.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.633 "adrfam": "ipv4", 00:20:45.633 "trsvcid": "$NVMF_PORT", 00:20:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.633 "hdgst": ${hdgst:-false}, 00:20:45.634 "ddgst": ${ddgst:-false} 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 } 00:20:45.634 EOF 00:20:45.634 )") 00:20:45.634 21:14:01 -- nvmf/common.sh@543 -- # cat 00:20:45.634 21:14:01 -- nvmf/common.sh@545 -- # jq . 
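The repeated config+=() / cat heredoc pairs traced above are gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per subsystem (Nvme1 through Nvme10); the IFS=, / printf / jq lines that follow join those stanzas and pretty-print the JSON that bdevperf then reads over --json /dev/fd/63. A condensed sketch of that generator is below, with field names copied from the trace; the outer subsystems/bdev wrapper is the standard SPDK JSON-config layout and is assumed here rather than copied from the helper.

    gen_nvmf_target_json() {   # usage: gen_nvmf_target_json 1 2 3 ... 10
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # One attach stanza per subsystem, mirroring the heredoc in the trace.
            config+=("{
                \"params\": {
                    \"name\": \"Nvme$subsystem\",
                    \"trtype\": \"$TEST_TRANSPORT\",
                    \"traddr\": \"$NVMF_FIRST_TARGET_IP\",
                    \"adrfam\": \"ipv4\",
                    \"trsvcid\": \"$NVMF_PORT\",
                    \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
                    \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
                    \"hdgst\": ${hdgst:-false},
                    \"ddgst\": ${ddgst:-false}
                },
                \"method\": \"bdev_nvme_attach_controller\"
            }")
        done
        # Comma-join the stanzas and pretty-print; the wrapper shape is an assumption.
        local IFS=,
        printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' \
            "${config[*]}" | jq .
    }

    # bdevperf consumes the result over a substituted file descriptor, e.g.
    #   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    #            -q 64 -o 65536 -w verify -t 10
    # which is what the --json /dev/fd/63 argument in the trace corresponds to.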
00:20:45.634 21:14:01 -- nvmf/common.sh@546 -- # IFS=, 00:20:45.634 21:14:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme1", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme2", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme3", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme4", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme5", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme6", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme7", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme8", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": 
"bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme9", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 },{ 00:20:45.634 "params": { 00:20:45.634 "name": "Nvme10", 00:20:45.634 "trtype": "tcp", 00:20:45.634 "traddr": "10.0.0.2", 00:20:45.634 "adrfam": "ipv4", 00:20:45.634 "trsvcid": "4420", 00:20:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:45.634 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:45.634 "hdgst": false, 00:20:45.634 "ddgst": false 00:20:45.634 }, 00:20:45.634 "method": "bdev_nvme_attach_controller" 00:20:45.634 }' 00:20:45.634 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.634 [2024-04-18 21:14:01.396868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.634 [2024-04-18 21:14:01.467762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.573 Running I/O for 10 seconds... 00:20:47.573 21:14:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:47.573 21:14:03 -- common/autotest_common.sh@850 -- # return 0 00:20:47.573 21:14:03 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:47.573 21:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.573 21:14:03 -- common/autotest_common.sh@10 -- # set +x 00:20:47.573 21:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.573 21:14:03 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:47.573 21:14:03 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:47.573 21:14:03 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:47.573 21:14:03 -- target/shutdown.sh@57 -- # local ret=1 00:20:47.573 21:14:03 -- target/shutdown.sh@58 -- # local i 00:20:47.573 21:14:03 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:47.573 21:14:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:47.573 21:14:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:47.573 21:14:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:47.573 21:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.573 21:14:03 -- common/autotest_common.sh@10 -- # set +x 00:20:47.573 21:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.573 21:14:03 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:47.573 21:14:03 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:47.573 21:14:03 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:47.830 21:14:03 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:47.830 21:14:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:47.830 21:14:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:47.830 21:14:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:47.830 21:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.830 21:14:03 -- common/autotest_common.sh@10 -- # set +x 00:20:47.830 21:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.830 21:14:03 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:47.830 21:14:03 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:47.830 21:14:03 -- target/shutdown.sh@67 -- # sleep 0.25 
00:20:48.088 21:14:03 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:48.088 21:14:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:48.088 21:14:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:48.088 21:14:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:48.088 21:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.088 21:14:03 -- common/autotest_common.sh@10 -- # set +x 00:20:48.088 21:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.088 21:14:03 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:48.088 21:14:03 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:48.088 21:14:03 -- target/shutdown.sh@64 -- # ret=0 00:20:48.088 21:14:03 -- target/shutdown.sh@65 -- # break 00:20:48.088 21:14:03 -- target/shutdown.sh@69 -- # return 0 00:20:48.088 21:14:03 -- target/shutdown.sh@110 -- # killprocess 3110266 00:20:48.088 21:14:03 -- common/autotest_common.sh@936 -- # '[' -z 3110266 ']' 00:20:48.088 21:14:03 -- common/autotest_common.sh@940 -- # kill -0 3110266 00:20:48.088 21:14:03 -- common/autotest_common.sh@941 -- # uname 00:20:48.088 21:14:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.088 21:14:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3110266 00:20:48.088 21:14:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:48.088 21:14:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:48.088 21:14:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3110266' 00:20:48.088 killing process with pid 3110266 00:20:48.088 21:14:03 -- common/autotest_common.sh@955 -- # kill 3110266 00:20:48.088 21:14:03 -- common/autotest_common.sh@960 -- # wait 3110266 00:20:48.345 Received shutdown signal, test time was about 0.923513 seconds 00:20:48.345 00:20:48.345 Latency(us) 00:20:48.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.345 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme1n1 : 0.91 282.32 17.65 0.00 0.00 224310.09 20059.71 212450.62 00:20:48.345 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme2n1 : 0.90 283.05 17.69 0.00 0.00 219746.62 20515.62 216097.84 00:20:48.345 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme3n1 : 0.89 294.63 18.41 0.00 0.00 205990.19 1951.83 211538.81 00:20:48.345 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme4n1 : 0.89 287.24 17.95 0.00 0.00 208479.28 19375.86 217921.45 00:20:48.345 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme5n1 : 0.92 279.26 17.45 0.00 0.00 211037.50 20059.71 200597.15 00:20:48.345 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme6n1 : 0.91 281.31 17.58 0.00 0.00 205330.48 22795.13 214274.23 00:20:48.345 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme7n1 : 0.92 278.08 17.38 0.00 0.00 204093.66 
16298.52 242540.19 00:20:48.345 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme8n1 : 0.92 277.40 17.34 0.00 0.00 200756.31 20857.54 221568.67 00:20:48.345 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme9n1 : 0.87 219.94 13.75 0.00 0.00 245907.22 19261.89 226127.69 00:20:48.345 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.345 Verification LBA range: start 0x0 length 0x400 00:20:48.345 Nvme10n1 : 0.90 217.97 13.62 0.00 0.00 243231.46 3647.22 246187.41 00:20:48.345 =================================================================================================================== 00:20:48.345 Total : 2701.20 168.83 0.00 0.00 215457.60 1951.83 246187.41 00:20:48.345 21:14:04 -- target/shutdown.sh@113 -- # sleep 1 00:20:49.719 21:14:05 -- target/shutdown.sh@114 -- # kill -0 3109993 00:20:49.719 21:14:05 -- target/shutdown.sh@116 -- # stoptarget 00:20:49.719 21:14:05 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:49.719 21:14:05 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:49.719 21:14:05 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:49.719 21:14:05 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:49.719 21:14:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:49.719 21:14:05 -- nvmf/common.sh@117 -- # sync 00:20:49.719 21:14:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:49.719 21:14:05 -- nvmf/common.sh@120 -- # set +e 00:20:49.719 21:14:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:49.719 21:14:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:49.719 rmmod nvme_tcp 00:20:49.719 rmmod nvme_fabrics 00:20:49.719 rmmod nvme_keyring 00:20:49.719 21:14:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:49.719 21:14:05 -- nvmf/common.sh@124 -- # set -e 00:20:49.719 21:14:05 -- nvmf/common.sh@125 -- # return 0 00:20:49.719 21:14:05 -- nvmf/common.sh@478 -- # '[' -n 3109993 ']' 00:20:49.719 21:14:05 -- nvmf/common.sh@479 -- # killprocess 3109993 00:20:49.719 21:14:05 -- common/autotest_common.sh@936 -- # '[' -z 3109993 ']' 00:20:49.719 21:14:05 -- common/autotest_common.sh@940 -- # kill -0 3109993 00:20:49.719 21:14:05 -- common/autotest_common.sh@941 -- # uname 00:20:49.719 21:14:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:49.719 21:14:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3109993 00:20:49.719 21:14:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:49.719 21:14:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:49.719 21:14:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3109993' 00:20:49.719 killing process with pid 3109993 00:20:49.719 21:14:05 -- common/autotest_common.sh@955 -- # kill 3109993 00:20:49.719 21:14:05 -- common/autotest_common.sh@960 -- # wait 3109993 00:20:49.977 21:14:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:49.977 21:14:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:49.977 21:14:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:49.977 21:14:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.977 21:14:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.977 
21:14:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.977 21:14:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.977 21:14:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.509 21:14:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:52.509 00:20:52.509 real 0m8.305s 00:20:52.509 user 0m25.470s 00:20:52.509 sys 0m1.402s 00:20:52.509 21:14:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:52.509 21:14:07 -- common/autotest_common.sh@10 -- # set +x 00:20:52.509 ************************************ 00:20:52.510 END TEST nvmf_shutdown_tc2 00:20:52.510 ************************************ 00:20:52.510 21:14:07 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:52.510 21:14:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:52.510 21:14:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.510 21:14:07 -- common/autotest_common.sh@10 -- # set +x 00:20:52.510 ************************************ 00:20:52.510 START TEST nvmf_shutdown_tc3 00:20:52.510 ************************************ 00:20:52.510 21:14:08 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:20:52.510 21:14:08 -- target/shutdown.sh@121 -- # starttarget 00:20:52.510 21:14:08 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:52.510 21:14:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:52.510 21:14:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.510 21:14:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:52.510 21:14:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:52.510 21:14:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:52.510 21:14:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.510 21:14:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.510 21:14:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.510 21:14:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:52.510 21:14:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:52.510 21:14:08 -- common/autotest_common.sh@10 -- # set +x 00:20:52.510 21:14:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:52.510 21:14:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.510 21:14:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.510 21:14:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:52.510 21:14:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.510 21:14:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.510 21:14:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.510 21:14:08 -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.510 21:14:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.510 21:14:08 -- nvmf/common.sh@296 -- # e810=() 00:20:52.510 21:14:08 -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.510 21:14:08 -- nvmf/common.sh@297 -- # x722=() 00:20:52.510 21:14:08 -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.510 21:14:08 -- nvmf/common.sh@298 -- # mlx=() 00:20:52.510 21:14:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.510 21:14:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.510 21:14:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.510 21:14:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.510 21:14:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.510 21:14:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.510 21:14:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:52.510 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:52.510 21:14:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.510 21:14:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:52.510 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:52.510 21:14:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.510 21:14:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.510 21:14:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.510 21:14:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:52.510 21:14:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.510 21:14:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:52.510 Found net devices under 0000:86:00.0: cvl_0_0 00:20:52.510 21:14:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.510 21:14:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.510 21:14:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.510 21:14:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:52.510 21:14:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.510 21:14:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:52.510 Found net devices under 0000:86:00.1: cvl_0_1 00:20:52.510 21:14:08 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.510 21:14:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:52.510 21:14:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:52.510 21:14:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:52.510 21:14:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.510 21:14:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.510 21:14:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.510 21:14:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.510 21:14:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.510 21:14:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.510 21:14:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:52.510 21:14:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.510 21:14:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.510 21:14:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:52.510 21:14:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:52.510 21:14:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.510 21:14:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.510 21:14:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.510 21:14:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.510 21:14:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:52.510 21:14:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.510 21:14:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.510 21:14:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.510 21:14:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:52.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:20:52.510 00:20:52.510 --- 10.0.0.2 ping statistics --- 00:20:52.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.510 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:52.510 21:14:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:52.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:20:52.510 00:20:52.510 --- 10.0.0.1 ping statistics --- 00:20:52.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.510 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:20:52.510 21:14:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.510 21:14:08 -- nvmf/common.sh@411 -- # return 0 00:20:52.510 21:14:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:52.510 21:14:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.510 21:14:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:52.510 21:14:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.510 21:14:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:52.510 21:14:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:52.510 21:14:08 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:52.510 21:14:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:52.510 21:14:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:52.510 21:14:08 -- common/autotest_common.sh@10 -- # set +x 00:20:52.510 21:14:08 -- nvmf/common.sh@470 -- # nvmfpid=3111546 00:20:52.510 21:14:08 -- nvmf/common.sh@471 -- # waitforlisten 3111546 00:20:52.510 21:14:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:52.510 21:14:08 -- common/autotest_common.sh@817 -- # '[' -z 3111546 ']' 00:20:52.510 21:14:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.510 21:14:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:52.510 21:14:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.510 21:14:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:52.510 21:14:08 -- common/autotest_common.sh@10 -- # set +x 00:20:52.510 [2024-04-18 21:14:08.399555] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:20:52.511 [2024-04-18 21:14:08.399599] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.511 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.769 [2024-04-18 21:14:08.463870] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.769 [2024-04-18 21:14:08.539908] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.769 [2024-04-18 21:14:08.539948] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.769 [2024-04-18 21:14:08.539955] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.769 [2024-04-18 21:14:08.539961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.769 [2024-04-18 21:14:08.539966] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
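The two pings above close out nvmf_tcp_init for the tc3 run: the same split topology used in tc2 is rebuilt, with the first E810 port (cvl_0_0, 10.0.0.2) moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt runs, the second port (cvl_0_1, 10.0.0.1) left in the root namespace for the initiator side, and one ping in each direction to prove the path before the target comes up. Condensed from the trace above; interface names are the ones this rig reports, and the full helper also flushes the addresses first.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # keep the host firewall from blocking NVMe/TCP
    ping -c 1 10.0.0.2                                   # root namespace -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address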
00:20:52.769 [2024-04-18 21:14:08.540072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.769 [2024-04-18 21:14:08.540177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.769 [2024-04-18 21:14:08.540285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.769 [2024-04-18 21:14:08.540286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:53.334 21:14:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:53.334 21:14:09 -- common/autotest_common.sh@850 -- # return 0 00:20:53.334 21:14:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:53.334 21:14:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:53.334 21:14:09 -- common/autotest_common.sh@10 -- # set +x 00:20:53.334 21:14:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.334 21:14:09 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.334 21:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.334 21:14:09 -- common/autotest_common.sh@10 -- # set +x 00:20:53.334 [2024-04-18 21:14:09.225191] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.334 21:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.334 21:14:09 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:53.334 21:14:09 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:53.334 21:14:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:53.334 21:14:09 -- common/autotest_common.sh@10 -- # set +x 00:20:53.334 21:14:09 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.334 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.334 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.334 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.334 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.334 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.334 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.334 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.334 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.334 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.334 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.334 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.334 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.592 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.592 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.592 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.592 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.592 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.592 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.592 21:14:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.592 21:14:09 -- target/shutdown.sh@28 -- # cat 00:20:53.592 21:14:09 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:53.592 21:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.592 21:14:09 -- common/autotest_common.sh@10 -- # set +x 00:20:53.592 Malloc1 00:20:53.592 [2024-04-18 21:14:09.321259] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.592 Malloc2 
00:20:53.592 Malloc3 00:20:53.592 Malloc4 00:20:53.592 Malloc5 00:20:53.592 Malloc6 00:20:53.851 Malloc7 00:20:53.851 Malloc8 00:20:53.851 Malloc9 00:20:53.851 Malloc10 00:20:53.851 21:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.851 21:14:09 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:53.851 21:14:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:53.851 21:14:09 -- common/autotest_common.sh@10 -- # set +x 00:20:53.851 21:14:09 -- target/shutdown.sh@125 -- # perfpid=3111820 00:20:53.851 21:14:09 -- target/shutdown.sh@126 -- # waitforlisten 3111820 /var/tmp/bdevperf.sock 00:20:53.851 21:14:09 -- common/autotest_common.sh@817 -- # '[' -z 3111820 ']' 00:20:53.851 21:14:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.851 21:14:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:53.851 21:14:09 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:53.851 21:14:09 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:53.851 21:14:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.851 21:14:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:53.851 21:14:09 -- nvmf/common.sh@521 -- # config=() 00:20:53.851 21:14:09 -- common/autotest_common.sh@10 -- # set +x 00:20:53.851 21:14:09 -- nvmf/common.sh@521 -- # local subsystem config 00:20:53.851 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:53.851 { 00:20:53.851 "params": { 00:20:53.851 "name": "Nvme$subsystem", 00:20:53.851 "trtype": "$TEST_TRANSPORT", 00:20:53.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.851 "adrfam": "ipv4", 00:20:53.851 "trsvcid": "$NVMF_PORT", 00:20:53.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.851 "hdgst": ${hdgst:-false}, 00:20:53.851 "ddgst": ${ddgst:-false} 00:20:53.851 }, 00:20:53.851 "method": "bdev_nvme_attach_controller" 00:20:53.851 } 00:20:53.851 EOF 00:20:53.851 )") 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:53.851 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:53.851 { 00:20:53.851 "params": { 00:20:53.851 "name": "Nvme$subsystem", 00:20:53.851 "trtype": "$TEST_TRANSPORT", 00:20:53.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.851 "adrfam": "ipv4", 00:20:53.851 "trsvcid": "$NVMF_PORT", 00:20:53.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.851 "hdgst": ${hdgst:-false}, 00:20:53.851 "ddgst": ${ddgst:-false} 00:20:53.851 }, 00:20:53.851 "method": "bdev_nvme_attach_controller" 00:20:53.851 } 00:20:53.851 EOF 00:20:53.851 )") 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:53.851 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:53.851 { 00:20:53.851 "params": { 00:20:53.851 "name": "Nvme$subsystem", 00:20:53.851 "trtype": "$TEST_TRANSPORT", 00:20:53.851 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:20:53.851 "adrfam": "ipv4", 00:20:53.851 "trsvcid": "$NVMF_PORT", 00:20:53.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.851 "hdgst": ${hdgst:-false}, 00:20:53.851 "ddgst": ${ddgst:-false} 00:20:53.851 }, 00:20:53.851 "method": "bdev_nvme_attach_controller" 00:20:53.851 } 00:20:53.851 EOF 00:20:53.851 )") 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:53.851 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:53.851 { 00:20:53.851 "params": { 00:20:53.851 "name": "Nvme$subsystem", 00:20:53.851 "trtype": "$TEST_TRANSPORT", 00:20:53.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.851 "adrfam": "ipv4", 00:20:53.851 "trsvcid": "$NVMF_PORT", 00:20:53.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.851 "hdgst": ${hdgst:-false}, 00:20:53.851 "ddgst": ${ddgst:-false} 00:20:53.851 }, 00:20:53.851 "method": "bdev_nvme_attach_controller" 00:20:53.851 } 00:20:53.851 EOF 00:20:53.851 )") 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:53.851 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:53.851 { 00:20:53.851 "params": { 00:20:53.851 "name": "Nvme$subsystem", 00:20:53.851 "trtype": "$TEST_TRANSPORT", 00:20:53.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.851 "adrfam": "ipv4", 00:20:53.851 "trsvcid": "$NVMF_PORT", 00:20:53.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.851 "hdgst": ${hdgst:-false}, 00:20:53.851 "ddgst": ${ddgst:-false} 00:20:53.851 }, 00:20:53.851 "method": "bdev_nvme_attach_controller" 00:20:53.851 } 00:20:53.851 EOF 00:20:53.851 )") 00:20:53.851 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:54.110 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:54.110 { 00:20:54.110 "params": { 00:20:54.110 "name": "Nvme$subsystem", 00:20:54.110 "trtype": "$TEST_TRANSPORT", 00:20:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.110 "adrfam": "ipv4", 00:20:54.110 "trsvcid": "$NVMF_PORT", 00:20:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.110 "hdgst": ${hdgst:-false}, 00:20:54.110 "ddgst": ${ddgst:-false} 00:20:54.110 }, 00:20:54.110 "method": "bdev_nvme_attach_controller" 00:20:54.110 } 00:20:54.110 EOF 00:20:54.110 )") 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:54.110 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:54.110 { 00:20:54.110 "params": { 00:20:54.110 "name": "Nvme$subsystem", 00:20:54.110 "trtype": "$TEST_TRANSPORT", 00:20:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.110 "adrfam": "ipv4", 00:20:54.110 "trsvcid": "$NVMF_PORT", 00:20:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.110 "hdgst": ${hdgst:-false}, 00:20:54.110 "ddgst": ${ddgst:-false} 00:20:54.110 }, 00:20:54.110 "method": "bdev_nvme_attach_controller" 00:20:54.110 } 00:20:54.110 EOF 00:20:54.110 )") 00:20:54.110 [2024-04-18 21:14:09.792866] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:20:54.110 [2024-04-18 21:14:09.792916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111820 ] 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:54.110 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:54.110 { 00:20:54.110 "params": { 00:20:54.110 "name": "Nvme$subsystem", 00:20:54.110 "trtype": "$TEST_TRANSPORT", 00:20:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.110 "adrfam": "ipv4", 00:20:54.110 "trsvcid": "$NVMF_PORT", 00:20:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.110 "hdgst": ${hdgst:-false}, 00:20:54.110 "ddgst": ${ddgst:-false} 00:20:54.110 }, 00:20:54.110 "method": "bdev_nvme_attach_controller" 00:20:54.110 } 00:20:54.110 EOF 00:20:54.110 )") 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:54.110 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:54.110 { 00:20:54.110 "params": { 00:20:54.110 "name": "Nvme$subsystem", 00:20:54.110 "trtype": "$TEST_TRANSPORT", 00:20:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.110 "adrfam": "ipv4", 00:20:54.110 "trsvcid": "$NVMF_PORT", 00:20:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.110 "hdgst": ${hdgst:-false}, 00:20:54.110 "ddgst": ${ddgst:-false} 00:20:54.110 }, 00:20:54.110 "method": "bdev_nvme_attach_controller" 00:20:54.110 } 00:20:54.110 EOF 00:20:54.110 )") 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:54.110 21:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:54.110 { 00:20:54.110 "params": { 00:20:54.110 "name": "Nvme$subsystem", 00:20:54.110 "trtype": "$TEST_TRANSPORT", 00:20:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.110 "adrfam": "ipv4", 00:20:54.110 "trsvcid": "$NVMF_PORT", 00:20:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.110 "hdgst": ${hdgst:-false}, 00:20:54.110 "ddgst": ${ddgst:-false} 00:20:54.110 }, 00:20:54.110 "method": "bdev_nvme_attach_controller" 00:20:54.110 } 00:20:54.110 EOF 00:20:54.110 )") 00:20:54.110 21:14:09 -- nvmf/common.sh@543 -- # cat 00:20:54.110 21:14:09 -- nvmf/common.sh@545 -- # jq . 
00:20:54.110 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.110 21:14:09 -- nvmf/common.sh@546 -- # IFS=, 00:20:54.110 21:14:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:54.110 "params": { 00:20:54.110 "name": "Nvme1", 00:20:54.110 "trtype": "tcp", 00:20:54.110 "traddr": "10.0.0.2", 00:20:54.110 "adrfam": "ipv4", 00:20:54.110 "trsvcid": "4420", 00:20:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.110 "hdgst": false, 00:20:54.110 "ddgst": false 00:20:54.110 }, 00:20:54.110 "method": "bdev_nvme_attach_controller" 00:20:54.110 },{ 00:20:54.110 "params": { 00:20:54.110 "name": "Nvme2", 00:20:54.110 "trtype": "tcp", 00:20:54.110 "traddr": "10.0.0.2", 00:20:54.110 "adrfam": "ipv4", 00:20:54.110 "trsvcid": "4420", 00:20:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.110 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:54.110 "hdgst": false, 00:20:54.110 "ddgst": false 00:20:54.110 }, 00:20:54.110 "method": "bdev_nvme_attach_controller" 00:20:54.110 },{ 00:20:54.110 "params": { 00:20:54.110 "name": "Nvme3", 00:20:54.110 "trtype": "tcp", 00:20:54.110 "traddr": "10.0.0.2", 00:20:54.110 "adrfam": "ipv4", 00:20:54.111 "trsvcid": "4420", 00:20:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:54.111 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:54.111 "hdgst": false, 00:20:54.111 "ddgst": false 00:20:54.111 }, 00:20:54.111 "method": "bdev_nvme_attach_controller" 00:20:54.111 },{ 00:20:54.111 "params": { 00:20:54.111 "name": "Nvme4", 00:20:54.111 "trtype": "tcp", 00:20:54.111 "traddr": "10.0.0.2", 00:20:54.111 "adrfam": "ipv4", 00:20:54.111 "trsvcid": "4420", 00:20:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:54.111 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:54.111 "hdgst": false, 00:20:54.111 "ddgst": false 00:20:54.111 }, 00:20:54.111 "method": "bdev_nvme_attach_controller" 00:20:54.111 },{ 00:20:54.111 "params": { 00:20:54.111 "name": "Nvme5", 00:20:54.111 "trtype": "tcp", 00:20:54.111 "traddr": "10.0.0.2", 00:20:54.111 "adrfam": "ipv4", 00:20:54.111 "trsvcid": "4420", 00:20:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:54.111 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:54.111 "hdgst": false, 00:20:54.111 "ddgst": false 00:20:54.111 }, 00:20:54.111 "method": "bdev_nvme_attach_controller" 00:20:54.111 },{ 00:20:54.111 "params": { 00:20:54.111 "name": "Nvme6", 00:20:54.111 "trtype": "tcp", 00:20:54.111 "traddr": "10.0.0.2", 00:20:54.111 "adrfam": "ipv4", 00:20:54.111 "trsvcid": "4420", 00:20:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:54.111 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:54.111 "hdgst": false, 00:20:54.111 "ddgst": false 00:20:54.111 }, 00:20:54.111 "method": "bdev_nvme_attach_controller" 00:20:54.111 },{ 00:20:54.111 "params": { 00:20:54.111 "name": "Nvme7", 00:20:54.111 "trtype": "tcp", 00:20:54.111 "traddr": "10.0.0.2", 00:20:54.111 "adrfam": "ipv4", 00:20:54.111 "trsvcid": "4420", 00:20:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:54.111 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:54.111 "hdgst": false, 00:20:54.111 "ddgst": false 00:20:54.111 }, 00:20:54.111 "method": "bdev_nvme_attach_controller" 00:20:54.111 },{ 00:20:54.111 "params": { 00:20:54.111 "name": "Nvme8", 00:20:54.111 "trtype": "tcp", 00:20:54.111 "traddr": "10.0.0.2", 00:20:54.111 "adrfam": "ipv4", 00:20:54.111 "trsvcid": "4420", 00:20:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:54.111 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:54.111 "hdgst": false, 00:20:54.111 "ddgst": false 
00:20:54.111 }, 00:20:54.111 "method": "bdev_nvme_attach_controller" 00:20:54.111 },{ 00:20:54.111 "params": { 00:20:54.111 "name": "Nvme9", 00:20:54.111 "trtype": "tcp", 00:20:54.111 "traddr": "10.0.0.2", 00:20:54.111 "adrfam": "ipv4", 00:20:54.111 "trsvcid": "4420", 00:20:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:54.111 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:54.111 "hdgst": false, 00:20:54.111 "ddgst": false 00:20:54.111 }, 00:20:54.111 "method": "bdev_nvme_attach_controller" 00:20:54.111 },{ 00:20:54.111 "params": { 00:20:54.111 "name": "Nvme10", 00:20:54.111 "trtype": "tcp", 00:20:54.111 "traddr": "10.0.0.2", 00:20:54.111 "adrfam": "ipv4", 00:20:54.111 "trsvcid": "4420", 00:20:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:54.111 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:54.111 "hdgst": false, 00:20:54.111 "ddgst": false 00:20:54.111 }, 00:20:54.111 "method": "bdev_nvme_attach_controller" 00:20:54.111 }' 00:20:54.111 [2024-04-18 21:14:09.855447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.111 [2024-04-18 21:14:09.927696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.012 Running I/O for 10 seconds... 00:20:56.012 21:14:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.012 21:14:11 -- common/autotest_common.sh@850 -- # return 0 00:20:56.012 21:14:11 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:56.012 21:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.012 21:14:11 -- common/autotest_common.sh@10 -- # set +x 00:20:56.012 21:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.012 21:14:11 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.012 21:14:11 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:56.012 21:14:11 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:56.012 21:14:11 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:56.012 21:14:11 -- target/shutdown.sh@57 -- # local ret=1 00:20:56.012 21:14:11 -- target/shutdown.sh@58 -- # local i 00:20:56.012 21:14:11 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:56.012 21:14:11 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:56.012 21:14:11 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.012 21:14:11 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:56.012 21:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.012 21:14:11 -- common/autotest_common.sh@10 -- # set +x 00:20:56.012 21:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.012 21:14:11 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:56.012 21:14:11 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:56.012 21:14:11 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:56.271 21:14:12 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:56.271 21:14:12 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:56.271 21:14:12 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:56.271 21:14:12 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.271 21:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.271 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:20:56.271 21:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.271 21:14:12 -- target/shutdown.sh@60 -- # read_io_count=67 
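Here, as in the earlier tc2 run, waitforio polls bdevperf over /var/tmp/bdevperf.sock for Nvme1n1's read-op count (read_io_count climbs from 3 to 67 in the trace) and only returns success once at least 100 reads have completed, proving I/O is genuinely flowing before anything is torn down. A stripped-down sketch of the helper is below; rpc.py and its path stand in for the harness's rpc_cmd wrapper.

    waitforio() {            # usage: waitforio <rpc socket> <bdev name>
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0        # enough reads observed on this bdev
                break
            fi
            sleep 0.25
        done
        return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1

In tc3 the process killed once this wait succeeds is the nvmf target itself (pid 3111546), so the target's shutdown path is exercised while bdevperf is still driving I/O.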
00:20:56.271 21:14:12 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:56.271 21:14:12 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:56.536 21:14:12 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:56.536 21:14:12 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:56.536 21:14:12 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:56.536 21:14:12 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.536 21:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.536 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:20:56.536 21:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.536 21:14:12 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:56.536 21:14:12 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:56.536 21:14:12 -- target/shutdown.sh@64 -- # ret=0 00:20:56.536 21:14:12 -- target/shutdown.sh@65 -- # break 00:20:56.536 21:14:12 -- target/shutdown.sh@69 -- # return 0 00:20:56.536 21:14:12 -- target/shutdown.sh@135 -- # killprocess 3111546 00:20:56.536 21:14:12 -- common/autotest_common.sh@936 -- # '[' -z 3111546 ']' 00:20:56.536 21:14:12 -- common/autotest_common.sh@940 -- # kill -0 3111546 00:20:56.536 21:14:12 -- common/autotest_common.sh@941 -- # uname 00:20:56.536 21:14:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:56.536 21:14:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3111546 00:20:56.536 21:14:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:56.536 21:14:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:56.536 21:14:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3111546' 00:20:56.536 killing process with pid 3111546 00:20:56.536 21:14:12 -- common/autotest_common.sh@955 -- # kill 3111546 00:20:56.536 21:14:12 -- common/autotest_common.sh@960 -- # wait 3111546 00:20:56.536 [2024-04-18 21:14:12.433135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24165e0 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.433199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24165e0 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434671] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434697] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434710] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434736] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be 
set 00:20:56.536 [2024-04-18 21:14:12.434742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434760] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434772] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434778] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434836] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434853] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.536 [2024-04-18 21:14:12.434882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434911] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434947] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.434995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435001] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435052] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.435064] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416a80 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436331] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the 
state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436369] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436375] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436393] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436399] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436417] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436549] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436577] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.537 [2024-04-18 21:14:12.436594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 
21:14:12.436606] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436623] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436629] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436653] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.436658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416f20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same 
with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437837] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437956] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437964] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437988] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.437995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438029] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the 
state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438111] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417860 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438947] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:1[2024-04-18 21:14:12.438965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.538 the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.538 [2024-04-18 21:14:12.438981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.438981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.438989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.438996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:1[2024-04-18 21:14:12.439002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2417d20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-18 21:14:12.439031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:1[2024-04-18 21:14:12.439047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439055] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with [2024-04-18 21:14:12.439055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:56.539 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439076] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439092] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:1[2024-04-18 21:14:12.439169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 the state(5) to be set 00:20:56.539 
[2024-04-18 21:14:12.439177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-18 21:14:12.439177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:1[2024-04-18 21:14:12.439221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-18 21:14:12.439231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-18 21:14:12.439285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.539 [2024-04-18 21:14:12.439307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.539 [2024-04-18 21:14:12.439312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.539 [2024-04-18 21:14:12.439314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the 
state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439350] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439365] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:1[2024-04-18 21:14:12.439380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 the state(5) to be set 00:20:56.540 [2024-04-18 21:14:12.439390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417d20 is same with [2024-04-18 21:14:12.439390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:56.540 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.540 [2024-04-18 21:14:12.439759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.540 [2024-04-18 21:14:12.439765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.439984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.439993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.541 [2024-04-18 21:14:12.440008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:56.541 [2024-04-18 21:14:12.440367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24181c0 is same with the state(5) to be set 00:20:56.541 [2024-04-18 21:14:12.440414] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1625f10 was disconnected and freed. reset controller. 
00:20:56.541 [2024-04-18 21:14:12.440473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3ac0 is same with the state(5) to be set 00:20:56.541 [2024-04-18 21:14:12.440566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517150 is same with the state(5) to be set 00:20:56.541 [2024-04-18 21:14:12.440642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec950 is same with the state(5) to be set 00:20:56.541 [2024-04-18 21:14:12.440719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b7270 is same with the state(5) to be set 00:20:56.541 [2024-04-18 21:14:12.440778] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418660 is same with the state(5) to be set 00:20:56.541 [2024-04-18 21:14:12.440800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.541 [2024-04-18 21:14:12.440808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.541 [2024-04-18 21:14:12.440815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 
[2024-04-18 21:14:12.440829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a7c70 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.440880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d93b0 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.440954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.440992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.440999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b38e0 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9610 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.542 [2024-04-18 21:14:12.441164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ad630 is same with the state(5) to be set 00:20:56.542 [2024-04-18 
21:14:12.441264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441330] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441397] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same 
with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441444] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 
[2024-04-18 21:14:12.441462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.542 [2024-04-18 21:14:12.441469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.542 [2024-04-18 21:14:12.441486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 [2024-04-18 21:14:12.441493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.542 
[2024-04-18 21:14:12.441496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441500] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 
[2024-04-18 21:14:12.441538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441558] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441565] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441572] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 
[2024-04-18 21:14:12.441575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 
[2024-04-18 21:14:12.441610] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441642] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 
[2024-04-18 21:14:12.441655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441664] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same 
with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418b00 is same with the state(5) to be set 00:20:56.543 [2024-04-18 21:14:12.441709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:56.543 [2024-04-18 21:14:12.441841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.543 [2024-04-18 21:14:12.441945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.543 [2024-04-18 21:14:12.441951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.441959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.441965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.441973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.441979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 
[2024-04-18 21:14:12.441988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.441994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.442002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.442008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.442016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.442023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454357] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.544 [2024-04-18 21:14:12.454717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.454747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:56.544 [2024-04-18 21:14:12.454803] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x162cbc0 was disconnected and freed. reset controller. 00:20:56.544 [2024-04-18 21:14:12.457569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b3ac0 (9): Bad file descriptor 00:20:56.544 [2024-04-18 21:14:12.457615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1517150 (9): Bad file descriptor 00:20:56.544 [2024-04-18 21:14:12.457637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ec950 (9): Bad file descriptor 00:20:56.544 [2024-04-18 21:14:12.457653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b7270 (9): Bad file descriptor 00:20:56.544 [2024-04-18 21:14:12.457670] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a7c70 (9): Bad file descriptor 00:20:56.544 [2024-04-18 21:14:12.457708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.544 [2024-04-18 21:14:12.457720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.544 [2024-04-18 21:14:12.457731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.545 [2024-04-18 21:14:12.457740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.457750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.545 [2024-04-18 21:14:12.457760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.457770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.545 [2024-04-18 21:14:12.457779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 
21:14:12.457788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba2b0 is same with the state(5) to be set 00:20:56.545 [2024-04-18 21:14:12.457809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d93b0 (9): Bad file descriptor 00:20:56.545 [2024-04-18 21:14:12.457828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b38e0 (9): Bad file descriptor 00:20:56.545 [2024-04-18 21:14:12.457853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9610 (9): Bad file descriptor 00:20:56.545 [2024-04-18 21:14:12.457875] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ad630 (9): Bad file descriptor 00:20:56.545 [2024-04-18 21:14:12.459357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:56.545 [2024-04-18 21:14:12.459959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.545 [2024-04-18 21:14:12.459968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.459980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.459989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:56.546 [2024-04-18 21:14:12.460167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 
21:14:12.460371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.546 [2024-04-18 21:14:12.460478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.546 [2024-04-18 21:14:12.460488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460779] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x159b2e0 was disconnected and freed. reset controller. 
00:20:56.814 [2024-04-18 21:14:12.460931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.460980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.460989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.461000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.461009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.461020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.461029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.461041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.461050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.461061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.461070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.461081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.461090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.461101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.814 [2024-04-18 21:14:12.461111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.814 [2024-04-18 21:14:12.461122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 
21:14:12.461145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 
21:14:12.461348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 
21:14:12.461559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 
21:14:12.461763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.815 [2024-04-18 21:14:12.461905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.815 [2024-04-18 21:14:12.461915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.461927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.461936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.461948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.461956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 
21:14:12.461968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.461977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.461988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.461997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 
21:14:12.462170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462310] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14e6a50 was disconnected and freed. reset controller. 00:20:56.816 [2024-04-18 21:14:12.462394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462508] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.816 [2024-04-18 21:14:12.462854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.816 [2024-04-18 21:14:12.462863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.462874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.462883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.462895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.462904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.462915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.462924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.462935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.462945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.462956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.462965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.462976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.462985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.462999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463553] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.817 [2024-04-18 21:14:12.463649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.817 [2024-04-18 21:14:12.463659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.463671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.463680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.463691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.463701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.463712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.463721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.463785] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14e7f50 was disconnected and freed. reset controller. 
00:20:56.818 [2024-04-18 21:14:12.463884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:56.818 [2024-04-18 21:14:12.463903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:56.818 [2024-04-18 21:14:12.468300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.818 [2024-04-18 21:14:12.468550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.818 [2024-04-18 21:14:12.468562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a7c70 with addr=10.0.0.2, port=4420 00:20:56.818 [2024-04-18 21:14:12.468571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a7c70 is same with the state(5) to be set 00:20:56.818 [2024-04-18 21:14:12.468795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.818 [2024-04-18 21:14:12.469124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.818 [2024-04-18 21:14:12.469133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ec950 with addr=10.0.0.2, port=4420 00:20:56.818 [2024-04-18 21:14:12.469140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec950 is same with the state(5) to be set 00:20:56.818 [2024-04-18 21:14:12.469158] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:56.818 [2024-04-18 21:14:12.469177] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba2b0 (9): Bad file descriptor 00:20:56.818 [2024-04-18 21:14:12.469204] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:56.818 [2024-04-18 21:14:12.469214] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:56.818 [2024-04-18 21:14:12.469766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:56.818 [2024-04-18 21:14:12.469789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:56.818 [2024-04-18 21:14:12.469813] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a7c70 (9): Bad file descriptor 00:20:56.818 [2024-04-18 21:14:12.469824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ec950 (9): Bad file descriptor 00:20:56.818 [2024-04-18 21:14:12.469881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.469891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.469902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.469909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.469919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.469925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.469934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.469941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.469949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.469956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.469964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.469970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.469978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.469985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.469993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.469999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.818 [2024-04-18 21:14:12.470245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.818 [2024-04-18 21:14:12.470251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:56.819 [2024-04-18 21:14:12.470459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 
21:14:12.470612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.819 [2024-04-18 21:14:12.470723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.819 [2024-04-18 21:14:12.470730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.470738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.470744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.470752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.470758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.470767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.470774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.470781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.470788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.470797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.470803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.470811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.470819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.470826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.470833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.471992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.471998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.820 [2024-04-18 21:14:12.472299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.820 [2024-04-18 21:14:12.472305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.821 [2024-04-18 21:14:12.472640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.821 [2024-04-18 21:14:12.472647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:56.821-00:20:56.825 [2024-04-18 21:14:12.472654 - 21:14:12.476924] nvme_qpair.c: repeated *NOTICE* pairs collapsed for readability: 243:nvme_io_qpair_print_command: READ sqid:1 cid:0-63 nsid:1 lba:16384-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:56.825 [2024-04-18 21:14:12.478379] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.825 [2024-04-18 21:14:12.478411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:56.825 [2024-04-18 21:14:12.478425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:56.825 [2024-04-18 21:14:12.478435] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:56.825 [2024-04-18 21:14:12.478443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:56.825 [2024-04-18 21:14:12.478793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.479149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.479159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1517150 with addr=10.0.0.2, port=4420 [2024-04-18 21:14:12.479166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517150 is same with the state(5) to be set
00:20:56.825 [2024-04-18 21:14:12.479429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.479749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.479759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff9610 with addr=10.0.0.2, port=4420 [2024-04-18 21:14:12.479766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9610 is same with the state(5) to be set
00:20:56.825 [2024-04-18 21:14:12.479773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:56.825 [2024-04-18 21:14:12.479783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:56.825 [2024-04-18 21:14:12.479791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:56.825 [2024-04-18 21:14:12.479805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:56.825 [2024-04-18 21:14:12.479811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:56.825 [2024-04-18 21:14:12.479817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:56.825 [2024-04-18 21:14:12.479842] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:56.825 [2024-04-18 21:14:12.479854] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:56.825 [2024-04-18 21:14:12.479864] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:56.825 [2024-04-18 21:14:12.479967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:56.825 [2024-04-18 21:14:12.479979] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:56.825 [2024-04-18 21:14:12.479986] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:56.825 [2024-04-18 21:14:12.480296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.480587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.480597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ad630 with addr=10.0.0.2, port=4420 [2024-04-18 21:14:12.480605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ad630 is same with the state(5) to be set
00:20:56.825 [2024-04-18 21:14:12.480955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.481307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.481317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b7270 with addr=10.0.0.2, port=4420 [2024-04-18 21:14:12.481324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b7270 is same with the state(5) to be set
00:20:56.825 [2024-04-18 21:14:12.481562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.481910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.481920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d93b0 with addr=10.0.0.2, port=4420 [2024-04-18 21:14:12.481927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d93b0 is same with the state(5) to be set
00:20:56.825 [2024-04-18 21:14:12.482221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.482488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.825 [2024-04-18 21:14:12.482498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b38e0 with addr=10.0.0.2, port=4420 [2024-04-18 21:14:12.482505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b38e0 is same with the state(5) to be set
00:20:56.825 [2024-04-18 21:14:12.482518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1517150 (9): Bad file descriptor
00:20:56.825 [2024-04-18 21:14:12.482528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9610 (9): Bad file descriptor
00:20:56.825 [2024-04-18 21:14:12.482537] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:56.825 [2024-04-18 21:14:12.482548] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:56.825-00:20:56.826 [2024-04-18 21:14:12.483498 - 21:14:12.484234] nvme_qpair.c: repeated *NOTICE* pairs collapsed for readability: 243:nvme_io_qpair_print_command: READ sqid:1 cid:0-49 nsid:1 lba:16384-22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:56.826 [2024-04-18 21:14:12.484242] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.826 [2024-04-18 21:14:12.484250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.826 [2024-04-18 21:14:12.484258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.826 [2024-04-18 21:14:12.484265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.827 [2024-04-18 21:14:12.484439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.827 [2024-04-18 21:14:12.484447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1624a30 is same with the state(5) to be set 00:20:56.827 task offset: 29056 on job bdev=Nvme10n1 fails 00:20:56.827 00:20:56.827 Latency(us) 00:20:56.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.827 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.827 Job: Nvme1n1 ended in about 0.88 seconds with error 00:20:56.827 Verification LBA range: start 0x0 length 0x400 00:20:56.827 Nvme1n1 : 0.88 217.06 13.57 72.35 0.00 218884.67 16754.42 240716.58 00:20:56.827 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.827 Job: Nvme2n1 ended in about 0.90 seconds with error 00:20:56.827 Verification LBA range: start 0x0 length 0x400 00:20:56.827 Nvme2n1 : 0.90 215.07 13.44 71.32 0.00 217307.66 18805.98 210627.01 00:20:56.827 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.827 Job: Nvme3n1 ended in about 0.90 seconds with error 00:20:56.827 Verification LBA range: start 0x0 length 0x400 00:20:56.827 Nvme3n1 : 0.90 142.33 8.90 71.16 0.00 286271.81 36244.26 244363.80 00:20:56.827 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.827 Job: Nvme4n1 ended in about 0.89 seconds with error 00:20:56.827 Verification LBA range: start 0x0 length 0x400 00:20:56.827 Nvme4n1 : 0.89 215.56 13.47 71.85 0.00 208553.63 9061.06 223392.28 00:20:56.827 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.827 Job: Nvme5n1 ended in about 0.90 seconds with error 00:20:56.827 Verification LBA range: start 0x0 length 0x400 00:20:56.827 Nvme5n1 : 0.90 141.98 8.87 70.99 0.00 276519.77 21427.42 242540.19 00:20:56.827 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.827 Job: Nvme6n1 ended in about 0.90 seconds with error 00:20:56.827 Verification LBA range: start 0x0 length 0x400 00:20:56.827 Nvme6n1 : 0.90 212.51 13.28 70.84 0.00 203910.90 19603.81 212450.62 00:20:56.827 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:56.827 Job: Nvme7n1 ended in about 0.89 seconds with error
00:20:56.827 Verification LBA range: start 0x0 length 0x400
00:20:56.827 Nvme7n1 : 0.89 215.23 13.45 71.74 0.00 197106.64 9630.94 240716.58
00:20:56.827 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.827 Job: Nvme8n1 ended in about 0.89 seconds with error
00:20:56.827 Verification LBA range: start 0x0 length 0x400
00:20:56.827 Nvme8n1 : 0.89 286.58 17.91 71.64 0.00 154737.17 10542.75 208803.39
00:20:56.827 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.827 Job: Nvme9n1 ended in about 0.91 seconds with error
00:20:56.827 Verification LBA range: start 0x0 length 0x400
00:20:56.827 Nvme9n1 : 0.91 140.51 8.78 70.25 0.00 258773.41 21541.40 268070.73
00:20:56.827 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.827 Job: Nvme10n1 ended in about 0.88 seconds with error
00:20:56.827 Verification LBA range: start 0x0 length 0x400
00:20:56.827 Nvme10n1 : 0.88 217.43 13.59 72.48 0.00 183179.13 17666.23 199685.34
00:20:56.827 ===================================================================================================================
00:20:56.827 Total : 2004.26 125.27 714.64 0.00 214584.05 9061.06 268070.73
00:20:56.827 [2024-04-18 21:14:12.509470] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:56.827 [2024-04-18 21:14:12.509520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:56.827 [2024-04-18 21:14:12.509864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.827 [2024-04-18 21:14:12.510197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.827 [2024-04-18 21:14:12.510208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b3ac0 with addr=10.0.0.2, port=4420
00:20:56.827 [2024-04-18 21:14:12.510217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3ac0 is same with the state(5) to be set
00:20:56.827 [2024-04-18 21:14:12.510230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ad630 (9): Bad file descriptor
00:20:56.827 [2024-04-18 21:14:12.510241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b7270 (9): Bad file descriptor
00:20:56.827 [2024-04-18 21:14:12.510250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d93b0 (9): Bad file descriptor
00:20:56.827 [2024-04-18 21:14:12.510258] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b38e0 (9): Bad file descriptor
00:20:56.827 [2024-04-18 21:14:12.510266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:56.827 [2024-04-18 21:14:12.510273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:56.827 [2024-04-18 21:14:12.510280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:20:56.827 [2024-04-18 21:14:12.510295] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:56.827 [2024-04-18 21:14:12.510302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:56.827 [2024-04-18 21:14:12.510308] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:56.827 [2024-04-18 21:14:12.510411] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.827 [2024-04-18 21:14:12.510420] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.827 [2024-04-18 21:14:12.510672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.827 [2024-04-18 21:14:12.510946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.827 [2024-04-18 21:14:12.510956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba2b0 with addr=10.0.0.2, port=4420 00:20:56.827 [2024-04-18 21:14:12.510963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba2b0 is same with the state(5) to be set 00:20:56.827 [2024-04-18 21:14:12.510972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b3ac0 (9): Bad file descriptor 00:20:56.827 [2024-04-18 21:14:12.510980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:56.827 [2024-04-18 21:14:12.510986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:56.827 [2024-04-18 21:14:12.510992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:56.827 [2024-04-18 21:14:12.511002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:56.827 [2024-04-18 21:14:12.511008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:56.827 [2024-04-18 21:14:12.511014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:56.827 [2024-04-18 21:14:12.511022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:56.827 [2024-04-18 21:14:12.511028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:56.827 [2024-04-18 21:14:12.511033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:56.827 [2024-04-18 21:14:12.511045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:56.827 [2024-04-18 21:14:12.511051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:56.827 [2024-04-18 21:14:12.511057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:56.828 [2024-04-18 21:14:12.511082] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:56.828 [2024-04-18 21:14:12.511092] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:56.828 [2024-04-18 21:14:12.511100] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:56.828 [2024-04-18 21:14:12.511109] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:56.828 [2024-04-18 21:14:12.511134] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:56.828 [2024-04-18 21:14:12.511424] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.828 [2024-04-18 21:14:12.511433] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.828 [2024-04-18 21:14:12.511439] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.828 [2024-04-18 21:14:12.511445] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.828 [2024-04-18 21:14:12.511460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba2b0 (9): Bad file descriptor 00:20:56.828 [2024-04-18 21:14:12.511469] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:56.828 [2024-04-18 21:14:12.511474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:56.828 [2024-04-18 21:14:12.511480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:56.828 [2024-04-18 21:14:12.511525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:56.828 [2024-04-18 21:14:12.511535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:56.828 [2024-04-18 21:14:12.511544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:56.828 [2024-04-18 21:14:12.511551] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.828 [2024-04-18 21:14:12.511570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:56.828 [2024-04-18 21:14:12.511577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:56.828 [2024-04-18 21:14:12.511583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:56.828 [2024-04-18 21:14:12.511605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:56.828 [2024-04-18 21:14:12.511620] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:56.828 [2024-04-18 21:14:12.511977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.828 [2024-04-18 21:14:12.512320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.828 [2024-04-18 21:14:12.512331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ec950 with addr=10.0.0.2, port=4420 00:20:56.828 [2024-04-18 21:14:12.512337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec950 is same with the state(5) to be set 00:20:56.828 [2024-04-18 21:14:12.512673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.828 [2024-04-18 21:14:12.512967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.828 [2024-04-18 21:14:12.512977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a7c70 with addr=10.0.0.2, port=4420 00:20:56.828 [2024-04-18 21:14:12.512987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a7c70 is same with the state(5) to be set 00:20:56.828 [2024-04-18 21:14:12.513280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.828 [2024-04-18 21:14:12.513578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.828 [2024-04-18 21:14:12.513589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff9610 with addr=10.0.0.2, port=4420 00:20:56.828 [2024-04-18 21:14:12.513595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9610 is same with the state(5) to be set 00:20:56.828 [2024-04-18 21:14:12.513887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.828 [2024-04-18 21:14:12.514159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.828 [2024-04-18 21:14:12.514169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1517150 with addr=10.0.0.2, port=4420 00:20:56.828 [2024-04-18 21:14:12.514175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517150 is same with the state(5) to be set 00:20:56.828 [2024-04-18 21:14:12.514184] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ec950 (9): Bad file descriptor 00:20:56.828 [2024-04-18 21:14:12.514192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a7c70 (9): Bad file descriptor 00:20:56.828 [2024-04-18 21:14:12.514200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9610 (9): Bad file descriptor 00:20:56.828 [2024-04-18 21:14:12.514225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1517150 (9): Bad file descriptor 00:20:56.828 [2024-04-18 21:14:12.514233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:56.828 [2024-04-18 21:14:12.514239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:56.828 [2024-04-18 21:14:12.514245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:56.828 [2024-04-18 21:14:12.514254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:56.828 [2024-04-18 21:14:12.514260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:56.828 [2024-04-18 21:14:12.514266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:56.828 [2024-04-18 21:14:12.514274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:56.828 [2024-04-18 21:14:12.514280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:56.828 [2024-04-18 21:14:12.514287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:56.828 [2024-04-18 21:14:12.514311] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.828 [2024-04-18 21:14:12.514317] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.828 [2024-04-18 21:14:12.514323] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.828 [2024-04-18 21:14:12.514328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:56.828 [2024-04-18 21:14:12.514334] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:56.828 [2024-04-18 21:14:12.514340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:56.828 [2024-04-18 21:14:12.514363] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
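The burst of messages above is the host side of an unclean target shutdown: the queued verify READs complete as ABORTED - SQ DELETION, the reconnect attempts fail with connect() errno 111 (connection refused, since the target process is gone), and the remaining qpair flushes and controller resets fail with (9) Bad file descriptor because the sockets are already torn down. When triaging a bdevperf log that looks like this, a few greps are usually enough to separate the expected fallout from anything new; the file name bdevperf.log below is just a placeholder for wherever the output was captured.

    # Rough triage sketch for a host-side log after the target is killed mid-I/O.
    grep -c 'ABORTED - SQ DELETION' bdevperf.log          # I/Os aborted when the submission queues were deleted
    grep -c 'connect() failed, errno = 111' bdevperf.log  # ECONNREFUSED: reconnect attempts while the target is down
    grep -c 'Bad file descriptor' bdevperf.log            # qpairs already closed during reset/failover
    grep -c 'Resetting controller failed' bdevperf.log    # per-controller resets that could not complete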
00:20:57.087 21:14:12 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:57.087 21:14:12 -- target/shutdown.sh@139 -- # sleep 1 00:20:58.024 21:14:13 -- target/shutdown.sh@142 -- # kill -9 3111820 00:20:58.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3111820) - No such process 00:20:58.024 21:14:13 -- target/shutdown.sh@142 -- # true 00:20:58.024 21:14:13 -- target/shutdown.sh@144 -- # stoptarget 00:20:58.024 21:14:13 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:58.024 21:14:13 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:58.024 21:14:13 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:58.024 21:14:13 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:58.024 21:14:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:58.024 21:14:13 -- nvmf/common.sh@117 -- # sync 00:20:58.024 21:14:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:58.024 21:14:13 -- nvmf/common.sh@120 -- # set +e 00:20:58.024 21:14:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:58.024 21:14:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:58.024 rmmod nvme_tcp 00:20:58.024 rmmod nvme_fabrics 00:20:58.024 rmmod nvme_keyring 00:20:58.024 21:14:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:58.024 21:14:13 -- nvmf/common.sh@124 -- # set -e 00:20:58.024 21:14:13 -- nvmf/common.sh@125 -- # return 0 00:20:58.024 21:14:13 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:58.024 21:14:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:58.024 21:14:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:58.024 21:14:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:58.024 21:14:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.282 21:14:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:58.282 21:14:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.282 21:14:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.282 21:14:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.184 21:14:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:00.184 00:21:00.184 real 0m7.991s 00:21:00.185 user 0m20.031s 00:21:00.185 sys 0m1.303s 00:21:00.185 21:14:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:00.185 21:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:00.185 ************************************ 00:21:00.185 END TEST nvmf_shutdown_tc3 00:21:00.185 ************************************ 00:21:00.185 21:14:16 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:00.185 00:21:00.185 real 0m32.766s 00:21:00.185 user 1m20.164s 00:21:00.185 sys 0m9.254s 00:21:00.185 21:14:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:00.185 21:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:00.185 ************************************ 00:21:00.185 END TEST nvmf_shutdown 00:21:00.185 ************************************ 00:21:00.185 21:14:16 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:21:00.185 21:14:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:00.185 21:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:00.185 21:14:16 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:21:00.185 21:14:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:00.185 21:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:00.443 
21:14:16 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:21:00.443 21:14:16 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:00.443 21:14:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:00.443 21:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:00.443 21:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:00.443 ************************************ 00:21:00.443 START TEST nvmf_multicontroller 00:21:00.443 ************************************ 00:21:00.443 21:14:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:00.443 * Looking for test storage... 00:21:00.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:00.443 21:14:16 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.443 21:14:16 -- nvmf/common.sh@7 -- # uname -s 00:21:00.443 21:14:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.443 21:14:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.443 21:14:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.443 21:14:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.443 21:14:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.443 21:14:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.443 21:14:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.443 21:14:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.443 21:14:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.443 21:14:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.443 21:14:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.443 21:14:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.443 21:14:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.443 21:14:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.443 21:14:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.443 21:14:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.443 21:14:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.443 21:14:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.443 21:14:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.443 21:14:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.443 21:14:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.443 21:14:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.443 21:14:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.443 21:14:16 -- paths/export.sh@5 -- # export PATH 00:21:00.443 21:14:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.443 21:14:16 -- nvmf/common.sh@47 -- # : 0 00:21:00.443 21:14:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.443 21:14:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.443 21:14:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.444 21:14:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.444 21:14:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.444 21:14:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.444 21:14:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.444 21:14:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.444 21:14:16 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.444 21:14:16 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.444 21:14:16 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:00.444 21:14:16 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:00.444 21:14:16 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.444 21:14:16 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:00.444 21:14:16 -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:00.444 21:14:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:00.444 21:14:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.444 21:14:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:00.444 21:14:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:00.444 21:14:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:00.444 21:14:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.444 21:14:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.444 21:14:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:21:00.444 21:14:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:00.444 21:14:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:00.444 21:14:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.444 21:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:07.025 21:14:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:07.025 21:14:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.025 21:14:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.025 21:14:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.025 21:14:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.025 21:14:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.025 21:14:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.025 21:14:21 -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.025 21:14:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.025 21:14:21 -- nvmf/common.sh@296 -- # e810=() 00:21:07.025 21:14:21 -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.025 21:14:21 -- nvmf/common.sh@297 -- # x722=() 00:21:07.025 21:14:21 -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.025 21:14:21 -- nvmf/common.sh@298 -- # mlx=() 00:21:07.025 21:14:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.025 21:14:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.025 21:14:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.025 21:14:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.025 21:14:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.025 21:14:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.025 21:14:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:07.025 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:07.025 21:14:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.025 21:14:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:07.025 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:07.025 21:14:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
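The gather_supported_nvmf_pci_devs trace above is only matching PCI vendor/device IDs against the E810 (0x1592, 0x159b), X722 (0x37d2) and Mellanox (vendor 0x15b3) tables and, with SPDK_TEST_NVMF_NICS=e810, keeping the two 0000:86:00.x ports it found. A quick manual equivalent of that lookup, using just the IDs listed in the trace, might be:

    # Sketch: list candidate NVMe-oF test NICs by the same PCI IDs the script checks.
    for dev in 1592 159b 37d2; do
        lspci -nn -d 8086:"$dev"        # Intel E810 / X722 variants
    done
    lspci -nn -d 15b3:                  # any Mellanox device (vendor-only filter)

On this node only the two 0x159b ports (0000:86:00.0 and 0000:86:00.1, handled by the ice driver) match, which is why the rest of the run works with their net devices.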
00:21:07.025 21:14:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.025 21:14:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.025 21:14:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.025 21:14:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.025 21:14:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:07.026 21:14:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.026 21:14:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:07.026 Found net devices under 0000:86:00.0: cvl_0_0 00:21:07.026 21:14:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.026 21:14:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.026 21:14:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.026 21:14:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:07.026 21:14:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.026 21:14:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:07.026 Found net devices under 0000:86:00.1: cvl_0_1 00:21:07.026 21:14:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.026 21:14:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:07.026 21:14:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:07.026 21:14:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:07.026 21:14:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:07.026 21:14:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:07.026 21:14:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.026 21:14:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.026 21:14:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.026 21:14:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.026 21:14:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.026 21:14:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.026 21:14:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.026 21:14:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.026 21:14:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.026 21:14:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.026 21:14:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.026 21:14:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.026 21:14:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.026 21:14:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.026 21:14:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.026 21:14:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.026 21:14:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.026 21:14:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.026 21:14:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:21:07.026 21:14:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:21:07.026 00:21:07.026 --- 10.0.0.2 ping statistics --- 00:21:07.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.026 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:21:07.026 21:14:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:21:07.026 00:21:07.026 --- 10.0.0.1 ping statistics --- 00:21:07.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.026 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:21:07.026 21:14:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.026 21:14:22 -- nvmf/common.sh@411 -- # return 0 00:21:07.026 21:14:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:07.026 21:14:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.026 21:14:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:07.026 21:14:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:07.026 21:14:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.026 21:14:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:07.026 21:14:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:07.026 21:14:22 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:07.026 21:14:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:07.026 21:14:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:07.026 21:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:07.026 21:14:22 -- nvmf/common.sh@470 -- # nvmfpid=3116245 00:21:07.026 21:14:22 -- nvmf/common.sh@471 -- # waitforlisten 3116245 00:21:07.026 21:14:22 -- common/autotest_common.sh@817 -- # '[' -z 3116245 ']' 00:21:07.026 21:14:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.026 21:14:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:07.026 21:14:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.026 21:14:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:07.026 21:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:07.026 21:14:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:07.026 [2024-04-18 21:14:22.147093] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:21:07.026 [2024-04-18 21:14:22.147134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.026 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.026 [2024-04-18 21:14:22.210344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:07.026 [2024-04-18 21:14:22.287536] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
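Before the target application was launched, nvmf_tcp_init (traced above) turned the two E810 ports into a small initiator/target pair: cvl_0_0 is moved into its own network namespace and becomes the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in the firewall, and a ping in each direction confirms the link. Condensed into a stand-alone sketch (interface and namespace names copied from the trace; they will differ on other hardware):

    # Sketch of the namespace-based NVMe/TCP test topology set up above.
    ns=cvl_0_0_ns_spdk
    sudo ip netns add "$ns"
    sudo ip link set cvl_0_0 netns "$ns"                      # target-side port
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
    sudo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec "$ns" ip link set cvl_0_0 up
    sudo ip netns exec "$ns" ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    sudo ip netns exec "$ns" ping -c 1 10.0.0.1               # target -> initiator

The nvmf_tgt process is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the SPDK startup messages around this point come from the namespaced target.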
00:21:07.026 [2024-04-18 21:14:22.287571] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.026 [2024-04-18 21:14:22.287578] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.026 [2024-04-18 21:14:22.287585] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.026 [2024-04-18 21:14:22.287589] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.026 [2024-04-18 21:14:22.287628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.026 [2024-04-18 21:14:22.287713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.026 [2024-04-18 21:14:22.287715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.026 21:14:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:07.026 21:14:22 -- common/autotest_common.sh@850 -- # return 0 00:21:07.026 21:14:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:07.026 21:14:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:07.283 21:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:07.283 21:14:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.283 21:14:22 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.283 21:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.283 21:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:07.283 [2024-04-18 21:14:22.995517] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.283 21:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.283 21:14:23 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:07.283 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.283 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.283 Malloc0 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.284 [2024-04-18 21:14:23.059234] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 
-- common/autotest_common.sh@10 -- # set +x 00:21:07.284 [2024-04-18 21:14:23.067168] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.284 Malloc1 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:07.284 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.284 21:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.284 21:14:23 -- host/multicontroller.sh@44 -- # bdevperf_pid=3116427 00:21:07.284 21:14:23 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:07.284 21:14:23 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.284 21:14:23 -- host/multicontroller.sh@47 -- # waitforlisten 3116427 /var/tmp/bdevperf.sock 00:21:07.284 21:14:23 -- common/autotest_common.sh@817 -- # '[' -z 3116427 ']' 00:21:07.284 21:14:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.284 21:14:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:07.284 21:14:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
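At this point the multicontroller target is fully configured over JSON-RPC: a TCP transport (created with the same -o -u 8192 options the test passes), two 64 MB malloc bdevs with 512-byte blocks, and two subsystems, cnode1 and cnode2, each exporting its bdev on both 10.0.0.2:4420 and 10.0.0.2:4421; bdevperf is then started with -z and its own RPC socket at /var/tmp/bdevperf.sock so the test can drive it over RPC as well. Written as plain rpc.py calls (scripts/rpc.py from an SPDK checkout is assumed here; the method names and arguments are the ones traced above), the target-side half is roughly:

    # Sketch of the target-side configuration just performed via rpc_cmd.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 repeats the same steps with Malloc1 and serial SPDK00000000000002

The bdev_nvme_attach_controller calls that follow are issued against the bdevperf RPC socket (-s /var/tmp/bdevperf.sock); the test deliberately reuses the controller name NVMe0 with a different host NQN, a different subsystem, and multipath disabled, expecting each of those attempts to be rejected with error -114 before it attaches a second path on port 4421.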
00:21:07.284 21:14:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:07.284 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:08.215 21:14:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:08.215 21:14:23 -- common/autotest_common.sh@850 -- # return 0 00:21:08.215 21:14:23 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:08.215 21:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.215 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:08.215 NVMe0n1 00:21:08.215 21:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.215 21:14:24 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:08.215 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.215 21:14:24 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:08.215 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.215 21:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.215 1 00:21:08.215 21:14:24 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:08.215 21:14:24 -- common/autotest_common.sh@638 -- # local es=0 00:21:08.215 21:14:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:08.215 21:14:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.215 21:14:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:08.215 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.215 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.215 request: 00:21:08.215 { 00:21:08.215 "name": "NVMe0", 00:21:08.215 "trtype": "tcp", 00:21:08.215 "traddr": "10.0.0.2", 00:21:08.215 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:08.215 "hostaddr": "10.0.0.2", 00:21:08.215 "hostsvcid": "60000", 00:21:08.215 "adrfam": "ipv4", 00:21:08.215 "trsvcid": "4420", 00:21:08.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.215 "method": "bdev_nvme_attach_controller", 00:21:08.215 "req_id": 1 00:21:08.215 } 00:21:08.215 Got JSON-RPC error response 00:21:08.215 response: 00:21:08.215 { 00:21:08.215 "code": -114, 00:21:08.215 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:08.215 } 00:21:08.215 21:14:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:08.215 21:14:24 -- common/autotest_common.sh@641 -- # es=1 00:21:08.215 21:14:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:08.215 21:14:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:08.215 21:14:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:08.215 21:14:24 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:08.215 21:14:24 -- common/autotest_common.sh@638 -- # local es=0 00:21:08.215 21:14:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:08.215 21:14:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.215 21:14:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:08.215 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.215 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.215 request: 00:21:08.215 { 00:21:08.215 "name": "NVMe0", 00:21:08.215 "trtype": "tcp", 00:21:08.215 "traddr": "10.0.0.2", 00:21:08.215 "hostaddr": "10.0.0.2", 00:21:08.215 "hostsvcid": "60000", 00:21:08.215 "adrfam": "ipv4", 00:21:08.215 "trsvcid": "4420", 00:21:08.215 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:08.215 "method": "bdev_nvme_attach_controller", 00:21:08.215 "req_id": 1 00:21:08.215 } 00:21:08.215 Got JSON-RPC error response 00:21:08.215 response: 00:21:08.215 { 00:21:08.215 "code": -114, 00:21:08.215 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:08.215 } 00:21:08.215 21:14:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:08.215 21:14:24 -- common/autotest_common.sh@641 -- # es=1 00:21:08.215 21:14:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:08.215 21:14:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:08.215 21:14:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:08.215 21:14:24 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:08.215 21:14:24 -- common/autotest_common.sh@638 -- # local es=0 00:21:08.215 21:14:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:08.215 21:14:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:08.215 21:14:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.215 21:14:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:08.215 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.215 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.473 request: 00:21:08.473 { 00:21:08.473 "name": "NVMe0", 00:21:08.473 "trtype": "tcp", 00:21:08.473 "traddr": "10.0.0.2", 00:21:08.473 "hostaddr": 
"10.0.0.2", 00:21:08.473 "hostsvcid": "60000", 00:21:08.473 "adrfam": "ipv4", 00:21:08.473 "trsvcid": "4420", 00:21:08.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.473 "multipath": "disable", 00:21:08.473 "method": "bdev_nvme_attach_controller", 00:21:08.473 "req_id": 1 00:21:08.473 } 00:21:08.473 Got JSON-RPC error response 00:21:08.473 response: 00:21:08.473 { 00:21:08.473 "code": -114, 00:21:08.473 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:08.473 } 00:21:08.473 21:14:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:08.473 21:14:24 -- common/autotest_common.sh@641 -- # es=1 00:21:08.473 21:14:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:08.473 21:14:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:08.473 21:14:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:08.473 21:14:24 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:08.473 21:14:24 -- common/autotest_common.sh@638 -- # local es=0 00:21:08.473 21:14:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:08.473 21:14:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:08.473 21:14:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.473 21:14:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:08.473 21:14:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.473 21:14:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:08.473 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.473 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.473 request: 00:21:08.473 { 00:21:08.473 "name": "NVMe0", 00:21:08.473 "trtype": "tcp", 00:21:08.473 "traddr": "10.0.0.2", 00:21:08.473 "hostaddr": "10.0.0.2", 00:21:08.473 "hostsvcid": "60000", 00:21:08.473 "adrfam": "ipv4", 00:21:08.473 "trsvcid": "4420", 00:21:08.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.473 "multipath": "failover", 00:21:08.473 "method": "bdev_nvme_attach_controller", 00:21:08.473 "req_id": 1 00:21:08.473 } 00:21:08.473 Got JSON-RPC error response 00:21:08.473 response: 00:21:08.473 { 00:21:08.473 "code": -114, 00:21:08.473 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:08.473 } 00:21:08.473 21:14:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:08.473 21:14:24 -- common/autotest_common.sh@641 -- # es=1 00:21:08.473 21:14:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:08.473 21:14:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:08.473 21:14:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:08.473 21:14:24 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:08.473 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.473 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.473 00:21:08.473 21:14:24 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:21:08.473 21:14:24 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:08.473 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.473 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.473 21:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.473 21:14:24 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:08.473 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.473 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.730 00:21:08.730 21:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.730 21:14:24 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:08.730 21:14:24 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:08.730 21:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.730 21:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:08.730 21:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.730 21:14:24 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:08.730 21:14:24 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.661 0 00:21:09.661 21:14:25 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:09.661 21:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.661 21:14:25 -- common/autotest_common.sh@10 -- # set +x 00:21:09.918 21:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.918 21:14:25 -- host/multicontroller.sh@100 -- # killprocess 3116427 00:21:09.918 21:14:25 -- common/autotest_common.sh@936 -- # '[' -z 3116427 ']' 00:21:09.918 21:14:25 -- common/autotest_common.sh@940 -- # kill -0 3116427 00:21:09.918 21:14:25 -- common/autotest_common.sh@941 -- # uname 00:21:09.918 21:14:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:09.918 21:14:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3116427 00:21:09.918 21:14:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:09.918 21:14:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:09.918 21:14:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3116427' 00:21:09.918 killing process with pid 3116427 00:21:09.918 21:14:25 -- common/autotest_common.sh@955 -- # kill 3116427 00:21:09.918 21:14:25 -- common/autotest_common.sh@960 -- # wait 3116427 00:21:09.918 21:14:25 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.918 21:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.919 21:14:25 -- common/autotest_common.sh@10 -- # set +x 00:21:10.177 21:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.177 21:14:25 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:10.177 21:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.177 21:14:25 -- common/autotest_common.sh@10 -- # set +x 00:21:10.177 21:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.177 21:14:25 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
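The attach/detach sequence above is driven through bdevperf's JSON-RPC socket with the same management calls SPDK exposes via scripts/rpc.py. A minimal hand-run sketch of the key calls follows; the socket path matches this run, but the rpc.py location is taken from the workspace layout and is an assumption, not something shown in the log:

# NVMe0 is attached once over port 4420; repeating the attach with a different host NQN,
# a different target subsystem, or an -x multipath mode is expected to fail with -114,
# as the request/response pairs above show
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# the second listener (port 4421) can be added and removed again as an extra path for NVMe0
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# a second controller name (NVMe1) against the same subsystem is accepted
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # 2 in this run
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1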
00:21:10.177 21:14:25 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:10.177 21:14:25 -- common/autotest_common.sh@1598 -- # read -r file 00:21:10.177 21:14:25 -- common/autotest_common.sh@1597 -- # sort -u 00:21:10.177 21:14:25 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:10.177 21:14:25 -- common/autotest_common.sh@1599 -- # cat 00:21:10.177 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:10.177 [2024-04-18 21:14:23.170163] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:21:10.177 [2024-04-18 21:14:23.170210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116427 ] 00:21:10.177 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.177 [2024-04-18 21:14:23.229879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.177 [2024-04-18 21:14:23.302461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.177 [2024-04-18 21:14:24.445986] bdev.c:4547:bdev_name_add: *ERROR*: Bdev name a9ca87fe-7721-439d-a776-b10629e90f81 already exists 00:21:10.177 [2024-04-18 21:14:24.446015] bdev.c:7650:bdev_register: *ERROR*: Unable to add uuid:a9ca87fe-7721-439d-a776-b10629e90f81 alias for bdev NVMe1n1 00:21:10.177 [2024-04-18 21:14:24.446024] bdev_nvme.c:4273:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:10.177 Running I/O for 1 seconds... 00:21:10.177 00:21:10.177 Latency(us) 00:21:10.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.177 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:10.177 NVMe0n1 : 1.01 23141.27 90.40 0.00 0.00 5518.33 3376.53 16184.54 00:21:10.177 =================================================================================================================== 00:21:10.177 Total : 23141.27 90.40 0.00 0.00 5518.33 3376.53 16184.54 00:21:10.177 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.177 00:21:10.177 Latency(us) 00:21:10.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.177 =================================================================================================================== 00:21:10.177 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.177 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:10.177 21:14:25 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:10.177 21:14:25 -- common/autotest_common.sh@1598 -- # read -r file 00:21:10.177 21:14:25 -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:10.177 21:14:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:10.177 21:14:25 -- nvmf/common.sh@117 -- # sync 00:21:10.178 21:14:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.178 21:14:25 -- nvmf/common.sh@120 -- # set +e 00:21:10.178 21:14:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.178 21:14:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.178 rmmod nvme_tcp 00:21:10.178 rmmod nvme_fabrics 00:21:10.178 rmmod nvme_keyring 00:21:10.178 21:14:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:10.178 21:14:25 -- nvmf/common.sh@124 -- # set -e 
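The try.txt dump above is bdevperf's own log for the one-second measurement: 4 KiB writes at queue depth 128 against NVMe0n1, landing at roughly 23.1K IOPS on this host. Roughly the same run can be reproduced outside the harness; the binary location and flag spelling below reflect how SPDK's bdevperf is usually invoked and are assumptions rather than something shown in this log:

# start bdevperf idle on a private RPC socket (-z waits for an RPC trigger before running I/O)
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &
# attach the NVMe-oF controllers as sketched earlier, then kick off the measurement
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests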
00:21:10.178 21:14:25 -- nvmf/common.sh@125 -- # return 0 00:21:10.178 21:14:25 -- nvmf/common.sh@478 -- # '[' -n 3116245 ']' 00:21:10.178 21:14:25 -- nvmf/common.sh@479 -- # killprocess 3116245 00:21:10.178 21:14:25 -- common/autotest_common.sh@936 -- # '[' -z 3116245 ']' 00:21:10.178 21:14:25 -- common/autotest_common.sh@940 -- # kill -0 3116245 00:21:10.178 21:14:25 -- common/autotest_common.sh@941 -- # uname 00:21:10.178 21:14:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:10.178 21:14:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3116245 00:21:10.178 21:14:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:10.178 21:14:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:10.178 21:14:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3116245' 00:21:10.178 killing process with pid 3116245 00:21:10.178 21:14:25 -- common/autotest_common.sh@955 -- # kill 3116245 00:21:10.178 21:14:25 -- common/autotest_common.sh@960 -- # wait 3116245 00:21:10.436 21:14:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:10.436 21:14:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:10.436 21:14:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:10.436 21:14:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.436 21:14:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.436 21:14:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.436 21:14:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.436 21:14:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.965 21:14:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:12.965 00:21:12.965 real 0m12.074s 00:21:12.965 user 0m16.311s 00:21:12.965 sys 0m5.067s 00:21:12.965 21:14:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:12.965 21:14:28 -- common/autotest_common.sh@10 -- # set +x 00:21:12.965 ************************************ 00:21:12.965 END TEST nvmf_multicontroller 00:21:12.965 ************************************ 00:21:12.965 21:14:28 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:12.965 21:14:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:12.965 21:14:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:12.965 21:14:28 -- common/autotest_common.sh@10 -- # set +x 00:21:12.965 ************************************ 00:21:12.965 START TEST nvmf_aer 00:21:12.965 ************************************ 00:21:12.965 21:14:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:12.965 * Looking for test storage... 
00:21:12.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:12.965 21:14:28 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.965 21:14:28 -- nvmf/common.sh@7 -- # uname -s 00:21:12.965 21:14:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.965 21:14:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.965 21:14:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.965 21:14:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.965 21:14:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.965 21:14:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.965 21:14:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.965 21:14:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.965 21:14:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.965 21:14:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.965 21:14:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.965 21:14:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.965 21:14:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.965 21:14:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.965 21:14:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.965 21:14:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.965 21:14:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.965 21:14:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.965 21:14:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.965 21:14:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.965 21:14:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.965 21:14:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.965 21:14:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.965 21:14:28 -- paths/export.sh@5 -- # export PATH 00:21:12.965 21:14:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.965 21:14:28 -- nvmf/common.sh@47 -- # : 0 00:21:12.965 21:14:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:12.965 21:14:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:12.965 21:14:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.965 21:14:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.965 21:14:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.965 21:14:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:12.965 21:14:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:12.965 21:14:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:12.965 21:14:28 -- host/aer.sh@11 -- # nvmftestinit 00:21:12.965 21:14:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:12.965 21:14:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.965 21:14:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:12.965 21:14:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:12.965 21:14:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:12.965 21:14:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.965 21:14:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.965 21:14:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.965 21:14:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:12.965 21:14:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:12.965 21:14:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:12.965 21:14:28 -- common/autotest_common.sh@10 -- # set +x 00:21:18.232 21:14:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:18.232 21:14:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:18.232 21:14:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:18.232 21:14:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:18.232 21:14:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:18.232 21:14:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:18.232 21:14:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:18.232 21:14:33 -- nvmf/common.sh@295 -- # net_devs=() 00:21:18.232 21:14:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:18.232 21:14:33 -- nvmf/common.sh@296 -- # e810=() 00:21:18.232 21:14:33 -- nvmf/common.sh@296 -- # local -ga e810 00:21:18.232 21:14:33 -- nvmf/common.sh@297 -- # x722=() 00:21:18.232 
21:14:33 -- nvmf/common.sh@297 -- # local -ga x722 00:21:18.232 21:14:33 -- nvmf/common.sh@298 -- # mlx=() 00:21:18.232 21:14:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:18.232 21:14:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.232 21:14:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:18.232 21:14:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:18.232 21:14:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:18.232 21:14:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.232 21:14:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:18.232 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:18.232 21:14:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.232 21:14:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:18.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:18.232 21:14:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:18.232 21:14:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.232 21:14:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.232 21:14:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:18.232 21:14:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.232 21:14:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:18.232 Found net devices under 0000:86:00.0: cvl_0_0 00:21:18.232 21:14:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.232 21:14:33 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.232 21:14:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.232 21:14:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:18.232 21:14:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.232 21:14:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:18.232 Found net devices under 0000:86:00.1: cvl_0_1 00:21:18.232 21:14:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.232 21:14:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:18.232 21:14:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:18.232 21:14:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:18.232 21:14:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:18.232 21:14:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.232 21:14:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.232 21:14:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.232 21:14:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:18.232 21:14:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.232 21:14:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.232 21:14:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:18.232 21:14:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.232 21:14:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.232 21:14:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:18.232 21:14:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:18.232 21:14:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.232 21:14:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.232 21:14:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.232 21:14:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.232 21:14:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:18.232 21:14:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.491 21:14:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.491 21:14:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.491 21:14:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:18.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:21:18.491 00:21:18.491 --- 10.0.0.2 ping statistics --- 00:21:18.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.491 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:18.491 21:14:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:21:18.491 00:21:18.491 --- 10.0.0.1 ping statistics --- 00:21:18.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.491 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:21:18.491 21:14:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.491 21:14:34 -- nvmf/common.sh@411 -- # return 0 00:21:18.491 21:14:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:18.491 21:14:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.491 21:14:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:18.491 21:14:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:18.491 21:14:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.491 21:14:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:18.491 21:14:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:18.491 21:14:34 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:18.491 21:14:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:18.491 21:14:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:18.491 21:14:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.491 21:14:34 -- nvmf/common.sh@470 -- # nvmfpid=3120712 00:21:18.491 21:14:34 -- nvmf/common.sh@471 -- # waitforlisten 3120712 00:21:18.491 21:14:34 -- common/autotest_common.sh@817 -- # '[' -z 3120712 ']' 00:21:18.491 21:14:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.491 21:14:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:18.491 21:14:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.491 21:14:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:18.491 21:14:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.491 21:14:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:18.491 [2024-04-18 21:14:34.306236] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:21:18.491 [2024-04-18 21:14:34.306279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.491 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.491 [2024-04-18 21:14:34.370252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.749 [2024-04-18 21:14:34.450006] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.749 [2024-04-18 21:14:34.450040] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.749 [2024-04-18 21:14:34.450047] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.749 [2024-04-18 21:14:34.450053] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.749 [2024-04-18 21:14:34.450058] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
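With the target now running inside the cvl_0_0_ns_spdk namespace, the aer.sh entries that follow provision it over JSON-RPC: a TCP transport, a 64 MiB / 512 B-block Malloc0 bdev, subsystem cnode1 limited to two namespaces, and a TCP listener on 10.0.0.2:4420. The same provisioning run by hand looks like the sketch below, assuming the default /var/tmp/spdk.sock RPC socket and the workspace's scripts/rpc.py:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems          # should list cnode1 with the Malloc0 namespace

Adding a second namespace (Malloc1) afterwards is what triggers the namespace-attribute-changed AER that the aer tool is waiting for further down.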
00:21:18.749 [2024-04-18 21:14:34.450096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.749 [2024-04-18 21:14:34.450112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.749 [2024-04-18 21:14:34.450204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.749 [2024-04-18 21:14:34.450205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.316 21:14:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:19.316 21:14:35 -- common/autotest_common.sh@850 -- # return 0 00:21:19.316 21:14:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:19.316 21:14:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.316 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 21:14:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.316 21:14:35 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:19.316 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.316 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 [2024-04-18 21:14:35.157442] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.316 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.316 21:14:35 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:19.316 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.316 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 Malloc0 00:21:19.316 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.316 21:14:35 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:19.316 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.316 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.316 21:14:35 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:19.316 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.316 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.316 21:14:35 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.316 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.316 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 [2024-04-18 21:14:35.209143] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.316 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.316 21:14:35 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:19.316 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.316 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 [2024-04-18 21:14:35.216943] nvmf_rpc.c: 279:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:19.316 [ 00:21:19.316 { 00:21:19.316 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:19.316 "subtype": "Discovery", 00:21:19.316 "listen_addresses": [], 00:21:19.316 "allow_any_host": true, 00:21:19.316 "hosts": [] 00:21:19.316 }, 00:21:19.316 { 00:21:19.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:19.316 "subtype": "NVMe", 00:21:19.316 "listen_addresses": [ 00:21:19.316 { 00:21:19.316 "transport": "TCP", 00:21:19.316 "trtype": "TCP", 00:21:19.316 "adrfam": "IPv4", 00:21:19.316 "traddr": "10.0.0.2", 00:21:19.316 "trsvcid": "4420" 00:21:19.316 } 00:21:19.316 ], 00:21:19.316 "allow_any_host": true, 00:21:19.316 "hosts": [], 00:21:19.316 "serial_number": "SPDK00000000000001", 00:21:19.316 "model_number": "SPDK bdev Controller", 00:21:19.316 "max_namespaces": 2, 00:21:19.316 "min_cntlid": 1, 00:21:19.316 "max_cntlid": 65519, 00:21:19.316 "namespaces": [ 00:21:19.316 { 00:21:19.316 "nsid": 1, 00:21:19.316 "bdev_name": "Malloc0", 00:21:19.316 "name": "Malloc0", 00:21:19.316 "nguid": "32F06A4C0DA7444786FA57DA4FA9E61A", 00:21:19.316 "uuid": "32f06a4c-0da7-4447-86fa-57da4fa9e61a" 00:21:19.316 } 00:21:19.316 ] 00:21:19.316 } 00:21:19.316 ] 00:21:19.316 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.316 21:14:35 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:19.316 21:14:35 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:19.316 21:14:35 -- host/aer.sh@33 -- # aerpid=3120958 00:21:19.316 21:14:35 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:19.316 21:14:35 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:19.316 21:14:35 -- common/autotest_common.sh@1251 -- # local i=0 00:21:19.316 21:14:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:19.316 21:14:35 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:19.316 21:14:35 -- common/autotest_common.sh@1254 -- # i=1 00:21:19.316 21:14:35 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:19.575 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.575 21:14:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:19.575 21:14:35 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:19.575 21:14:35 -- common/autotest_common.sh@1254 -- # i=2 00:21:19.575 21:14:35 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:19.575 21:14:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:19.575 21:14:35 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:19.575 21:14:35 -- common/autotest_common.sh@1262 -- # return 0 00:21:19.575 21:14:35 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:19.575 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.575 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.575 Malloc1 00:21:19.575 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.575 21:14:35 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:19.575 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.575 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.575 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.575 21:14:35 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:19.575 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.575 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.575 Asynchronous Event Request test 00:21:19.575 Attaching to 10.0.0.2 00:21:19.575 Attached to 10.0.0.2 00:21:19.575 Registering asynchronous event callbacks... 
00:21:19.575 Starting namespace attribute notice tests for all controllers... 00:21:19.575 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:19.575 aer_cb - Changed Namespace 00:21:19.575 Cleaning up... 00:21:19.575 [ 00:21:19.575 { 00:21:19.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:19.575 "subtype": "Discovery", 00:21:19.575 "listen_addresses": [], 00:21:19.575 "allow_any_host": true, 00:21:19.575 "hosts": [] 00:21:19.575 }, 00:21:19.575 { 00:21:19.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.575 "subtype": "NVMe", 00:21:19.575 "listen_addresses": [ 00:21:19.575 { 00:21:19.575 "transport": "TCP", 00:21:19.575 "trtype": "TCP", 00:21:19.575 "adrfam": "IPv4", 00:21:19.575 "traddr": "10.0.0.2", 00:21:19.575 "trsvcid": "4420" 00:21:19.575 } 00:21:19.575 ], 00:21:19.575 "allow_any_host": true, 00:21:19.575 "hosts": [], 00:21:19.575 "serial_number": "SPDK00000000000001", 00:21:19.575 "model_number": "SPDK bdev Controller", 00:21:19.575 "max_namespaces": 2, 00:21:19.575 "min_cntlid": 1, 00:21:19.575 "max_cntlid": 65519, 00:21:19.575 "namespaces": [ 00:21:19.575 { 00:21:19.575 "nsid": 1, 00:21:19.575 "bdev_name": "Malloc0", 00:21:19.575 "name": "Malloc0", 00:21:19.575 "nguid": "32F06A4C0DA7444786FA57DA4FA9E61A", 00:21:19.575 "uuid": "32f06a4c-0da7-4447-86fa-57da4fa9e61a" 00:21:19.575 }, 00:21:19.575 { 00:21:19.575 "nsid": 2, 00:21:19.575 "bdev_name": "Malloc1", 00:21:19.575 "name": "Malloc1", 00:21:19.575 "nguid": "75570966FCA145078FB472A164259720", 00:21:19.575 "uuid": "75570966-fca1-4507-8fb4-72a164259720" 00:21:19.575 } 00:21:19.833 ] 00:21:19.833 } 00:21:19.833 ] 00:21:19.833 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.833 21:14:35 -- host/aer.sh@43 -- # wait 3120958 00:21:19.833 21:14:35 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:19.833 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.833 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.833 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.833 21:14:35 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:19.833 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.833 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.833 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.833 21:14:35 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.833 21:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.833 21:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.833 21:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.833 21:14:35 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:19.833 21:14:35 -- host/aer.sh@51 -- # nvmftestfini 00:21:19.833 21:14:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:19.833 21:14:35 -- nvmf/common.sh@117 -- # sync 00:21:19.833 21:14:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.833 21:14:35 -- nvmf/common.sh@120 -- # set +e 00:21:19.833 21:14:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.833 21:14:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.833 rmmod nvme_tcp 00:21:19.833 rmmod nvme_fabrics 00:21:19.833 rmmod nvme_keyring 00:21:19.833 21:14:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.833 21:14:35 -- nvmf/common.sh@124 -- # set -e 00:21:19.833 21:14:35 -- nvmf/common.sh@125 -- # return 0 00:21:19.834 21:14:35 -- nvmf/common.sh@478 -- # '[' -n 3120712 ']' 00:21:19.834 21:14:35 
-- nvmf/common.sh@479 -- # killprocess 3120712 00:21:19.834 21:14:35 -- common/autotest_common.sh@936 -- # '[' -z 3120712 ']' 00:21:19.834 21:14:35 -- common/autotest_common.sh@940 -- # kill -0 3120712 00:21:19.834 21:14:35 -- common/autotest_common.sh@941 -- # uname 00:21:19.834 21:14:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.834 21:14:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3120712 00:21:19.834 21:14:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:19.834 21:14:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:19.834 21:14:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3120712' 00:21:19.834 killing process with pid 3120712 00:21:19.834 21:14:35 -- common/autotest_common.sh@955 -- # kill 3120712 00:21:19.834 [2024-04-18 21:14:35.666417] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:19.834 21:14:35 -- common/autotest_common.sh@960 -- # wait 3120712 00:21:20.092 21:14:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:20.092 21:14:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:20.092 21:14:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:20.092 21:14:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.092 21:14:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:20.092 21:14:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.092 21:14:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.092 21:14:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.628 21:14:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:22.628 00:21:22.628 real 0m9.462s 00:21:22.628 user 0m7.126s 00:21:22.628 sys 0m4.679s 00:21:22.628 21:14:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:22.628 21:14:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.628 ************************************ 00:21:22.628 END TEST nvmf_aer 00:21:22.628 ************************************ 00:21:22.628 21:14:37 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:22.628 21:14:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:22.628 21:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.628 21:14:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.628 ************************************ 00:21:22.628 START TEST nvmf_async_init 00:21:22.628 ************************************ 00:21:22.629 21:14:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:22.629 * Looking for test storage... 
00:21:22.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:22.629 21:14:38 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.629 21:14:38 -- nvmf/common.sh@7 -- # uname -s 00:21:22.629 21:14:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.629 21:14:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.629 21:14:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.629 21:14:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.629 21:14:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.629 21:14:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.629 21:14:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.629 21:14:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.629 21:14:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.629 21:14:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.629 21:14:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.629 21:14:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.629 21:14:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.629 21:14:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.629 21:14:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.629 21:14:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.629 21:14:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.629 21:14:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.629 21:14:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.629 21:14:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.629 21:14:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.629 21:14:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.629 21:14:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.629 21:14:38 -- paths/export.sh@5 -- # export PATH 00:21:22.629 21:14:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.629 21:14:38 -- nvmf/common.sh@47 -- # : 0 00:21:22.629 21:14:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.629 21:14:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.629 21:14:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.629 21:14:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.629 21:14:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.629 21:14:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.629 21:14:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.629 21:14:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.629 21:14:38 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:22.629 21:14:38 -- host/async_init.sh@14 -- # null_block_size=512 00:21:22.629 21:14:38 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:22.629 21:14:38 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:22.629 21:14:38 -- host/async_init.sh@20 -- # uuidgen 00:21:22.629 21:14:38 -- host/async_init.sh@20 -- # tr -d - 00:21:22.629 21:14:38 -- host/async_init.sh@20 -- # nguid=000beb0aeb1245d497f609c4755aed62 00:21:22.629 21:14:38 -- host/async_init.sh@22 -- # nvmftestinit 00:21:22.629 21:14:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:22.629 21:14:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.629 21:14:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:22.629 21:14:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:22.629 21:14:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:22.629 21:14:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.629 21:14:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.629 21:14:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.629 21:14:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:22.629 21:14:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:22.629 21:14:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.629 21:14:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.189 21:14:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:29.189 21:14:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.189 21:14:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.189 21:14:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.189 21:14:43 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.189 21:14:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.189 21:14:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.189 21:14:43 -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.189 21:14:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.189 21:14:43 -- nvmf/common.sh@296 -- # e810=() 00:21:29.189 21:14:43 -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.189 21:14:43 -- nvmf/common.sh@297 -- # x722=() 00:21:29.189 21:14:43 -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.189 21:14:43 -- nvmf/common.sh@298 -- # mlx=() 00:21:29.189 21:14:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.189 21:14:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.189 21:14:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.189 21:14:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:29.189 21:14:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.189 21:14:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.189 21:14:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:29.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:29.189 21:14:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.189 21:14:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:29.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:29.189 21:14:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.189 21:14:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.189 
21:14:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.189 21:14:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:29.189 21:14:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.189 21:14:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:29.189 Found net devices under 0000:86:00.0: cvl_0_0 00:21:29.189 21:14:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.189 21:14:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.189 21:14:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.189 21:14:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:29.189 21:14:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.189 21:14:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:29.189 Found net devices under 0000:86:00.1: cvl_0_1 00:21:29.189 21:14:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.189 21:14:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:29.189 21:14:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:29.189 21:14:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:29.189 21:14:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:29.189 21:14:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.189 21:14:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.189 21:14:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.189 21:14:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:29.189 21:14:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.189 21:14:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.189 21:14:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:29.189 21:14:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.189 21:14:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.189 21:14:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:29.189 21:14:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:29.189 21:14:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.189 21:14:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.189 21:14:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.189 21:14:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.189 21:14:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.189 21:14:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.189 21:14:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.189 21:14:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.189 21:14:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:21:29.189 00:21:29.189 --- 10.0.0.2 ping statistics --- 00:21:29.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.190 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:21:29.190 21:14:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:21:29.190 00:21:29.190 --- 10.0.0.1 ping statistics --- 00:21:29.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.190 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:29.190 21:14:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.190 21:14:44 -- nvmf/common.sh@411 -- # return 0 00:21:29.190 21:14:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:29.190 21:14:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.190 21:14:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:29.190 21:14:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:29.190 21:14:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.190 21:14:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:29.190 21:14:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:29.190 21:14:44 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:29.190 21:14:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:29.190 21:14:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:29.190 21:14:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.190 21:14:44 -- nvmf/common.sh@470 -- # nvmfpid=3124787 00:21:29.190 21:14:44 -- nvmf/common.sh@471 -- # waitforlisten 3124787 00:21:29.190 21:14:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:29.190 21:14:44 -- common/autotest_common.sh@817 -- # '[' -z 3124787 ']' 00:21:29.190 21:14:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.190 21:14:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:29.190 21:14:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.190 21:14:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:29.190 21:14:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.190 [2024-04-18 21:14:44.274149] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:21:29.190 [2024-04-18 21:14:44.274192] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.190 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.190 [2024-04-18 21:14:44.339436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.190 [2024-04-18 21:14:44.410064] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.190 [2024-04-18 21:14:44.410106] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.190 [2024-04-18 21:14:44.410113] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.190 [2024-04-18 21:14:44.410119] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.190 [2024-04-18 21:14:44.410125] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
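For reference, the nvmf_tcp_init trace above amounts to a small two-port topology: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator side, and an iptables rule plus two pings confirm the NVMe/TCP path on port 4420. A minimal sketch of the same steps, using the interface and namespace names from this run:

    # move the target-side port into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator side gets 10.0.0.1, target side gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic in, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1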
00:21:29.190 [2024-04-18 21:14:44.410150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.190 21:14:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:29.190 21:14:45 -- common/autotest_common.sh@850 -- # return 0 00:21:29.190 21:14:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:29.190 21:14:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:29.190 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.190 21:14:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.190 21:14:45 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:29.190 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.190 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.190 [2024-04-18 21:14:45.105853] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.190 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.190 21:14:45 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:29.190 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.190 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.190 null0 00:21:29.190 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.190 21:14:45 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:29.190 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.190 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.521 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.521 21:14:45 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:29.521 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.521 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.521 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.521 21:14:45 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 000beb0aeb1245d497f609c4755aed62 00:21:29.521 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.521 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.521 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.521 21:14:45 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:29.521 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.521 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.521 [2024-04-18 21:14:45.146083] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.521 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.521 21:14:45 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:29.521 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.521 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.521 nvme0n1 00:21:29.521 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.521 21:14:45 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:29.521 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.521 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.521 [ 00:21:29.521 { 00:21:29.521 "name": "nvme0n1", 00:21:29.521 "aliases": [ 00:21:29.521 
"000beb0a-eb12-45d4-97f6-09c4755aed62" 00:21:29.521 ], 00:21:29.521 "product_name": "NVMe disk", 00:21:29.521 "block_size": 512, 00:21:29.521 "num_blocks": 2097152, 00:21:29.521 "uuid": "000beb0a-eb12-45d4-97f6-09c4755aed62", 00:21:29.521 "assigned_rate_limits": { 00:21:29.521 "rw_ios_per_sec": 0, 00:21:29.521 "rw_mbytes_per_sec": 0, 00:21:29.521 "r_mbytes_per_sec": 0, 00:21:29.521 "w_mbytes_per_sec": 0 00:21:29.521 }, 00:21:29.521 "claimed": false, 00:21:29.521 "zoned": false, 00:21:29.521 "supported_io_types": { 00:21:29.521 "read": true, 00:21:29.521 "write": true, 00:21:29.521 "unmap": false, 00:21:29.521 "write_zeroes": true, 00:21:29.521 "flush": true, 00:21:29.521 "reset": true, 00:21:29.521 "compare": true, 00:21:29.521 "compare_and_write": true, 00:21:29.521 "abort": true, 00:21:29.521 "nvme_admin": true, 00:21:29.521 "nvme_io": true 00:21:29.521 }, 00:21:29.521 "memory_domains": [ 00:21:29.521 { 00:21:29.521 "dma_device_id": "system", 00:21:29.521 "dma_device_type": 1 00:21:29.521 } 00:21:29.521 ], 00:21:29.521 "driver_specific": { 00:21:29.521 "nvme": [ 00:21:29.521 { 00:21:29.521 "trid": { 00:21:29.522 "trtype": "TCP", 00:21:29.522 "adrfam": "IPv4", 00:21:29.522 "traddr": "10.0.0.2", 00:21:29.522 "trsvcid": "4420", 00:21:29.522 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:29.522 }, 00:21:29.522 "ctrlr_data": { 00:21:29.522 "cntlid": 1, 00:21:29.522 "vendor_id": "0x8086", 00:21:29.522 "model_number": "SPDK bdev Controller", 00:21:29.522 "serial_number": "00000000000000000000", 00:21:29.522 "firmware_revision": "24.05", 00:21:29.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:29.522 "oacs": { 00:21:29.522 "security": 0, 00:21:29.522 "format": 0, 00:21:29.522 "firmware": 0, 00:21:29.522 "ns_manage": 0 00:21:29.522 }, 00:21:29.522 "multi_ctrlr": true, 00:21:29.522 "ana_reporting": false 00:21:29.522 }, 00:21:29.522 "vs": { 00:21:29.522 "nvme_version": "1.3" 00:21:29.522 }, 00:21:29.522 "ns_data": { 00:21:29.522 "id": 1, 00:21:29.522 "can_share": true 00:21:29.522 } 00:21:29.522 } 00:21:29.522 ], 00:21:29.522 "mp_policy": "active_passive" 00:21:29.522 } 00:21:29.522 } 00:21:29.522 ] 00:21:29.522 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.522 21:14:45 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:29.522 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.522 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.522 [2024-04-18 21:14:45.394587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:29.522 [2024-04-18 21:14:45.394643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efe710 (9): Bad file descriptor 00:21:29.786 [2024-04-18 21:14:45.526607] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.786 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.786 21:14:45 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:29.786 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.786 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.786 [ 00:21:29.786 { 00:21:29.786 "name": "nvme0n1", 00:21:29.786 "aliases": [ 00:21:29.786 "000beb0a-eb12-45d4-97f6-09c4755aed62" 00:21:29.786 ], 00:21:29.786 "product_name": "NVMe disk", 00:21:29.786 "block_size": 512, 00:21:29.786 "num_blocks": 2097152, 00:21:29.786 "uuid": "000beb0a-eb12-45d4-97f6-09c4755aed62", 00:21:29.786 "assigned_rate_limits": { 00:21:29.786 "rw_ios_per_sec": 0, 00:21:29.786 "rw_mbytes_per_sec": 0, 00:21:29.786 "r_mbytes_per_sec": 0, 00:21:29.786 "w_mbytes_per_sec": 0 00:21:29.786 }, 00:21:29.786 "claimed": false, 00:21:29.786 "zoned": false, 00:21:29.786 "supported_io_types": { 00:21:29.786 "read": true, 00:21:29.786 "write": true, 00:21:29.786 "unmap": false, 00:21:29.786 "write_zeroes": true, 00:21:29.786 "flush": true, 00:21:29.786 "reset": true, 00:21:29.786 "compare": true, 00:21:29.786 "compare_and_write": true, 00:21:29.786 "abort": true, 00:21:29.786 "nvme_admin": true, 00:21:29.786 "nvme_io": true 00:21:29.786 }, 00:21:29.786 "memory_domains": [ 00:21:29.786 { 00:21:29.786 "dma_device_id": "system", 00:21:29.786 "dma_device_type": 1 00:21:29.786 } 00:21:29.786 ], 00:21:29.786 "driver_specific": { 00:21:29.786 "nvme": [ 00:21:29.786 { 00:21:29.786 "trid": { 00:21:29.786 "trtype": "TCP", 00:21:29.786 "adrfam": "IPv4", 00:21:29.786 "traddr": "10.0.0.2", 00:21:29.786 "trsvcid": "4420", 00:21:29.786 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:29.786 }, 00:21:29.786 "ctrlr_data": { 00:21:29.786 "cntlid": 2, 00:21:29.786 "vendor_id": "0x8086", 00:21:29.786 "model_number": "SPDK bdev Controller", 00:21:29.786 "serial_number": "00000000000000000000", 00:21:29.786 "firmware_revision": "24.05", 00:21:29.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:29.786 "oacs": { 00:21:29.786 "security": 0, 00:21:29.786 "format": 0, 00:21:29.786 "firmware": 0, 00:21:29.786 "ns_manage": 0 00:21:29.786 }, 00:21:29.786 "multi_ctrlr": true, 00:21:29.786 "ana_reporting": false 00:21:29.786 }, 00:21:29.786 "vs": { 00:21:29.786 "nvme_version": "1.3" 00:21:29.786 }, 00:21:29.786 "ns_data": { 00:21:29.786 "id": 1, 00:21:29.786 "can_share": true 00:21:29.786 } 00:21:29.786 } 00:21:29.786 ], 00:21:29.786 "mp_policy": "active_passive" 00:21:29.786 } 00:21:29.786 } 00:21:29.786 ] 00:21:29.786 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.786 21:14:45 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.786 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.786 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.786 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.786 21:14:45 -- host/async_init.sh@53 -- # mktemp 00:21:29.786 21:14:45 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.j9RVZ3sGif 00:21:29.787 21:14:45 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:29.787 21:14:45 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.j9RVZ3sGif 00:21:29.787 21:14:45 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:29.787 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.787 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 21:14:45 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.787 21:14:45 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:29.787 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.787 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 [2024-04-18 21:14:45.579165] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.787 [2024-04-18 21:14:45.579284] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:29.787 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.787 21:14:45 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j9RVZ3sGif 00:21:29.787 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.787 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 [2024-04-18 21:14:45.587184] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:29.787 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.787 21:14:45 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j9RVZ3sGif 00:21:29.787 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.787 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 [2024-04-18 21:14:45.595203] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.787 [2024-04-18 21:14:45.595239] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:29.787 nvme0n1 00:21:29.787 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.787 21:14:45 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:29.787 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.787 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 [ 00:21:29.787 { 00:21:29.787 "name": "nvme0n1", 00:21:29.787 "aliases": [ 00:21:29.787 "000beb0a-eb12-45d4-97f6-09c4755aed62" 00:21:29.787 ], 00:21:29.787 "product_name": "NVMe disk", 00:21:29.787 "block_size": 512, 00:21:29.787 "num_blocks": 2097152, 00:21:29.787 "uuid": "000beb0a-eb12-45d4-97f6-09c4755aed62", 00:21:29.787 "assigned_rate_limits": { 00:21:29.787 "rw_ios_per_sec": 0, 00:21:29.787 "rw_mbytes_per_sec": 0, 00:21:29.787 "r_mbytes_per_sec": 0, 00:21:29.787 "w_mbytes_per_sec": 0 00:21:29.787 }, 00:21:29.787 "claimed": false, 00:21:29.787 "zoned": false, 00:21:29.787 "supported_io_types": { 00:21:29.787 "read": true, 00:21:29.787 "write": true, 00:21:29.787 "unmap": false, 00:21:29.787 "write_zeroes": true, 00:21:29.787 "flush": true, 00:21:29.787 "reset": true, 00:21:29.787 "compare": true, 00:21:29.787 "compare_and_write": true, 00:21:29.787 "abort": true, 00:21:29.787 "nvme_admin": true, 00:21:29.787 "nvme_io": true 00:21:29.787 }, 00:21:29.787 "memory_domains": [ 00:21:29.787 { 00:21:29.787 "dma_device_id": "system", 00:21:29.787 "dma_device_type": 1 00:21:29.787 } 00:21:29.787 ], 00:21:29.787 "driver_specific": { 00:21:29.787 "nvme": [ 00:21:29.787 { 00:21:29.787 "trid": { 00:21:29.787 "trtype": "TCP", 00:21:29.787 "adrfam": "IPv4", 00:21:29.787 "traddr": "10.0.0.2", 
00:21:29.787 "trsvcid": "4421", 00:21:29.787 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:29.787 }, 00:21:29.787 "ctrlr_data": { 00:21:29.787 "cntlid": 3, 00:21:29.787 "vendor_id": "0x8086", 00:21:29.787 "model_number": "SPDK bdev Controller", 00:21:29.787 "serial_number": "00000000000000000000", 00:21:29.787 "firmware_revision": "24.05", 00:21:29.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:29.787 "oacs": { 00:21:29.787 "security": 0, 00:21:29.787 "format": 0, 00:21:29.787 "firmware": 0, 00:21:29.787 "ns_manage": 0 00:21:29.787 }, 00:21:29.787 "multi_ctrlr": true, 00:21:29.787 "ana_reporting": false 00:21:29.787 }, 00:21:29.787 "vs": { 00:21:29.787 "nvme_version": "1.3" 00:21:29.787 }, 00:21:29.787 "ns_data": { 00:21:29.787 "id": 1, 00:21:29.787 "can_share": true 00:21:29.787 } 00:21:29.787 } 00:21:29.787 ], 00:21:29.787 "mp_policy": "active_passive" 00:21:29.787 } 00:21:29.787 } 00:21:29.787 ] 00:21:29.787 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.787 21:14:45 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.787 21:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.787 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 21:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.787 21:14:45 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.j9RVZ3sGif 00:21:29.787 21:14:45 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:29.787 21:14:45 -- host/async_init.sh@78 -- # nvmftestfini 00:21:29.787 21:14:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:29.787 21:14:45 -- nvmf/common.sh@117 -- # sync 00:21:29.787 21:14:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:29.787 21:14:45 -- nvmf/common.sh@120 -- # set +e 00:21:29.787 21:14:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:29.787 21:14:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:29.787 rmmod nvme_tcp 00:21:29.787 rmmod nvme_fabrics 00:21:30.046 rmmod nvme_keyring 00:21:30.046 21:14:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.046 21:14:45 -- nvmf/common.sh@124 -- # set -e 00:21:30.046 21:14:45 -- nvmf/common.sh@125 -- # return 0 00:21:30.046 21:14:45 -- nvmf/common.sh@478 -- # '[' -n 3124787 ']' 00:21:30.046 21:14:45 -- nvmf/common.sh@479 -- # killprocess 3124787 00:21:30.046 21:14:45 -- common/autotest_common.sh@936 -- # '[' -z 3124787 ']' 00:21:30.046 21:14:45 -- common/autotest_common.sh@940 -- # kill -0 3124787 00:21:30.046 21:14:45 -- common/autotest_common.sh@941 -- # uname 00:21:30.046 21:14:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:30.046 21:14:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3124787 00:21:30.046 21:14:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:30.046 21:14:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:30.046 21:14:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3124787' 00:21:30.046 killing process with pid 3124787 00:21:30.046 21:14:45 -- common/autotest_common.sh@955 -- # kill 3124787 00:21:30.046 [2024-04-18 21:14:45.791865] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:30.046 [2024-04-18 21:14:45.791893] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:30.046 21:14:45 -- common/autotest_common.sh@960 -- # wait 3124787 00:21:30.304 21:14:45 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:30.304 21:14:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:30.304 21:14:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:30.305 21:14:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.305 21:14:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.305 21:14:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.305 21:14:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.305 21:14:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.206 21:14:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:32.206 00:21:32.206 real 0m9.915s 00:21:32.206 user 0m3.522s 00:21:32.206 sys 0m4.864s 00:21:32.206 21:14:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:32.206 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:21:32.206 ************************************ 00:21:32.206 END TEST nvmf_async_init 00:21:32.206 ************************************ 00:21:32.206 21:14:48 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:32.206 21:14:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:32.206 21:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:32.206 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:21:32.466 ************************************ 00:21:32.466 START TEST dma 00:21:32.466 ************************************ 00:21:32.466 21:14:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:32.466 * Looking for test storage... 00:21:32.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:32.466 21:14:48 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.466 21:14:48 -- nvmf/common.sh@7 -- # uname -s 00:21:32.466 21:14:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.466 21:14:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.466 21:14:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.466 21:14:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.466 21:14:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.466 21:14:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.466 21:14:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.466 21:14:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.466 21:14:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.466 21:14:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.466 21:14:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.466 21:14:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.466 21:14:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.466 21:14:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.466 21:14:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.466 21:14:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.466 21:14:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.466 21:14:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.466 21:14:48 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.466 21:14:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.466 21:14:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.466 21:14:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.466 21:14:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.466 21:14:48 -- paths/export.sh@5 -- # export PATH 00:21:32.466 21:14:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.466 21:14:48 -- nvmf/common.sh@47 -- # : 0 00:21:32.466 21:14:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.466 21:14:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.466 21:14:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.466 21:14:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.466 21:14:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.466 21:14:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.466 21:14:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.466 21:14:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:32.466 21:14:48 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:32.466 21:14:48 -- host/dma.sh@13 -- # exit 0 00:21:32.466 00:21:32.466 real 0m0.110s 00:21:32.466 user 0m0.046s 00:21:32.466 sys 0m0.073s 00:21:32.466 21:14:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:32.466 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:21:32.466 ************************************ 00:21:32.466 END TEST dma 00:21:32.466 
************************************ 00:21:32.466 21:14:48 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:32.466 21:14:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:32.466 21:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:32.466 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:21:32.724 ************************************ 00:21:32.724 START TEST nvmf_identify 00:21:32.724 ************************************ 00:21:32.724 21:14:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:32.724 * Looking for test storage... 00:21:32.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:32.724 21:14:48 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.724 21:14:48 -- nvmf/common.sh@7 -- # uname -s 00:21:32.724 21:14:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.724 21:14:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.724 21:14:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.724 21:14:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.724 21:14:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.724 21:14:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.724 21:14:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.724 21:14:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.724 21:14:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.724 21:14:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.724 21:14:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.724 21:14:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.724 21:14:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.724 21:14:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.724 21:14:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.724 21:14:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.724 21:14:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.724 21:14:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.724 21:14:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.724 21:14:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.724 21:14:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.724 21:14:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.724 21:14:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.724 21:14:48 -- paths/export.sh@5 -- # export PATH 00:21:32.724 21:14:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.724 21:14:48 -- nvmf/common.sh@47 -- # : 0 00:21:32.724 21:14:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.724 21:14:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.724 21:14:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.724 21:14:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.724 21:14:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.724 21:14:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.724 21:14:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.724 21:14:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:32.724 21:14:48 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:32.724 21:14:48 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:32.724 21:14:48 -- host/identify.sh@14 -- # nvmftestinit 00:21:32.724 21:14:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:32.724 21:14:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.724 21:14:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:32.724 21:14:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:32.724 21:14:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:32.724 21:14:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.724 21:14:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.724 21:14:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.724 21:14:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:32.725 21:14:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:32.725 21:14:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.725 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:21:39.291 21:14:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:21:39.291 21:14:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:39.291 21:14:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:39.291 21:14:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:39.291 21:14:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:39.291 21:14:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:39.291 21:14:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:39.291 21:14:54 -- nvmf/common.sh@295 -- # net_devs=() 00:21:39.291 21:14:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:39.291 21:14:54 -- nvmf/common.sh@296 -- # e810=() 00:21:39.291 21:14:54 -- nvmf/common.sh@296 -- # local -ga e810 00:21:39.291 21:14:54 -- nvmf/common.sh@297 -- # x722=() 00:21:39.291 21:14:54 -- nvmf/common.sh@297 -- # local -ga x722 00:21:39.291 21:14:54 -- nvmf/common.sh@298 -- # mlx=() 00:21:39.291 21:14:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:39.291 21:14:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.291 21:14:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:39.291 21:14:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:39.291 21:14:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:39.291 21:14:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.291 21:14:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:39.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:39.291 21:14:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.291 21:14:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:39.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:39.291 21:14:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
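The NIC discovery traced here is plain sysfs walking: the supported vendor/device IDs (0x8086:0x159b for these E810 ports) are matched against each PCI function, and the netdev name is then read from that function's net/ directory. Roughly, for the 0000:86:00.0 port seen in this run:

    pci=0000:86:00.0
    cat /sys/bus/pci/devices/$pci/vendor   # 0x8086
    cat /sys/bus/pci/devices/$pci/device   # 0x159b
    ls /sys/bus/pci/devices/$pci/net/      # -> cvl_0_0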
00:21:39.291 21:14:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.291 21:14:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.291 21:14:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:39.291 21:14:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.291 21:14:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:39.291 Found net devices under 0000:86:00.0: cvl_0_0 00:21:39.291 21:14:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.291 21:14:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.291 21:14:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.291 21:14:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:39.291 21:14:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.291 21:14:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:39.291 Found net devices under 0000:86:00.1: cvl_0_1 00:21:39.291 21:14:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.291 21:14:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:39.291 21:14:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:39.291 21:14:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:39.291 21:14:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:39.291 21:14:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.291 21:14:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.291 21:14:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.291 21:14:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:39.291 21:14:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.291 21:14:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.291 21:14:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:39.291 21:14:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.291 21:14:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.291 21:14:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:39.291 21:14:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:39.291 21:14:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.291 21:14:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.291 21:14:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.291 21:14:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.291 21:14:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:39.291 21:14:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.291 21:14:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.291 21:14:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.291 21:14:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:39.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:39.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:21:39.291 00:21:39.291 --- 10.0.0.2 ping statistics --- 00:21:39.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.291 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:39.291 21:14:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:21:39.291 00:21:39.291 --- 10.0.0.1 ping statistics --- 00:21:39.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.291 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:21:39.291 21:14:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.291 21:14:55 -- nvmf/common.sh@411 -- # return 0 00:21:39.291 21:14:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:39.291 21:14:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.291 21:14:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:39.291 21:14:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:39.291 21:14:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.291 21:14:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:39.291 21:14:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:39.291 21:14:55 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:39.291 21:14:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:39.291 21:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:39.291 21:14:55 -- host/identify.sh@19 -- # nvmfpid=3129118 00:21:39.291 21:14:55 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:39.291 21:14:55 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:39.291 21:14:55 -- host/identify.sh@23 -- # waitforlisten 3129118 00:21:39.291 21:14:55 -- common/autotest_common.sh@817 -- # '[' -z 3129118 ']' 00:21:39.291 21:14:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.291 21:14:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:39.291 21:14:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.291 21:14:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:39.291 21:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:39.291 [2024-04-18 21:14:55.123265] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:21:39.291 [2024-04-18 21:14:55.123308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.291 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.291 [2024-04-18 21:14:55.188365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.550 [2024-04-18 21:14:55.263004] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.550 [2024-04-18 21:14:55.263048] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
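nvmfappstart then simply loads the nvme-tcp module, launches nvmf_tgt inside the target namespace, and waits for its RPC socket before the identify test proceeds. A rough equivalent of what the trace shows (the polling loop is only an illustration of what the harness's waitforlisten helper does; paths and flags are the ones from this log):

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, tracepoint mask 0xFFFF, 4 cores
    nvmfpid=$!

    # wait until the target answers on its default RPC socket
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done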
00:21:39.550 [2024-04-18 21:14:55.263055] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.550 [2024-04-18 21:14:55.263061] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.550 [2024-04-18 21:14:55.263067] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.550 [2024-04-18 21:14:55.263109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.550 [2024-04-18 21:14:55.263206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.550 [2024-04-18 21:14:55.263225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.550 [2024-04-18 21:14:55.263227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.118 21:14:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:40.118 21:14:55 -- common/autotest_common.sh@850 -- # return 0 00:21:40.118 21:14:55 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.118 21:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.118 21:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.118 [2024-04-18 21:14:55.938349] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.118 21:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.118 21:14:55 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:40.118 21:14:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:40.118 21:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.118 21:14:55 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.118 21:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.118 21:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.118 Malloc0 00:21:40.118 21:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.118 21:14:56 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.118 21:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.118 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.118 21:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.118 21:14:56 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:40.118 21:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.118 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.118 21:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.118 21:14:56 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.118 21:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.118 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.118 [2024-04-18 21:14:56.030321] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.118 21:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.118 21:14:56 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:40.118 21:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.118 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.118 21:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.118 21:14:56 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:21:40.118 21:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.118 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.118 [2024-04-18 21:14:56.046141] nvmf_rpc.c: 279:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:40.380 [ 00:21:40.380 { 00:21:40.380 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:40.380 "subtype": "Discovery", 00:21:40.380 "listen_addresses": [ 00:21:40.380 { 00:21:40.380 "transport": "TCP", 00:21:40.380 "trtype": "TCP", 00:21:40.380 "adrfam": "IPv4", 00:21:40.380 "traddr": "10.0.0.2", 00:21:40.380 "trsvcid": "4420" 00:21:40.380 } 00:21:40.380 ], 00:21:40.380 "allow_any_host": true, 00:21:40.380 "hosts": [] 00:21:40.380 }, 00:21:40.380 { 00:21:40.380 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.380 "subtype": "NVMe", 00:21:40.380 "listen_addresses": [ 00:21:40.380 { 00:21:40.380 "transport": "TCP", 00:21:40.380 "trtype": "TCP", 00:21:40.380 "adrfam": "IPv4", 00:21:40.380 "traddr": "10.0.0.2", 00:21:40.380 "trsvcid": "4420" 00:21:40.380 } 00:21:40.380 ], 00:21:40.380 "allow_any_host": true, 00:21:40.380 "hosts": [], 00:21:40.380 "serial_number": "SPDK00000000000001", 00:21:40.380 "model_number": "SPDK bdev Controller", 00:21:40.380 "max_namespaces": 32, 00:21:40.380 "min_cntlid": 1, 00:21:40.380 "max_cntlid": 65519, 00:21:40.380 "namespaces": [ 00:21:40.380 { 00:21:40.380 "nsid": 1, 00:21:40.380 "bdev_name": "Malloc0", 00:21:40.380 "name": "Malloc0", 00:21:40.380 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:40.380 "eui64": "ABCDEF0123456789", 00:21:40.380 "uuid": "fe0d24f8-f3cd-46a2-8dd6-9c7c9e45b9ee" 00:21:40.380 } 00:21:40.380 ] 00:21:40.380 } 00:21:40.380 ] 00:21:40.380 21:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.380 21:14:56 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:40.380 [2024-04-18 21:14:56.080950] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:21:40.380 [2024-04-18 21:14:56.080983] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129361 ] 00:21:40.380 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.380 [2024-04-18 21:14:56.111086] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:40.380 [2024-04-18 21:14:56.111127] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:40.380 [2024-04-18 21:14:56.111133] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:40.380 [2024-04-18 21:14:56.111145] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:40.380 [2024-04-18 21:14:56.111152] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:40.380 [2024-04-18 21:14:56.111594] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:40.380 [2024-04-18 21:14:56.111623] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x51acb0 0 00:21:40.380 [2024-04-18 21:14:56.118523] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:40.380 [2024-04-18 21:14:56.118540] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:40.380 [2024-04-18 21:14:56.118544] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:40.380 [2024-04-18 21:14:56.118547] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:40.380 [2024-04-18 21:14:56.118583] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.380 [2024-04-18 21:14:56.118588] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.380 [2024-04-18 21:14:56.118592] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.380 [2024-04-18 21:14:56.118605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:40.380 [2024-04-18 21:14:56.118620] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.380 [2024-04-18 21:14:56.126520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.380 [2024-04-18 21:14:56.126528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.380 [2024-04-18 21:14:56.126532] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.380 [2024-04-18 21:14:56.126535] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.380 [2024-04-18 21:14:56.126546] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:40.380 [2024-04-18 21:14:56.126552] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:40.380 [2024-04-18 21:14:56.126557] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:40.380 [2024-04-18 21:14:56.126567] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.380 [2024-04-18 21:14:56.126571] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:40.380 [2024-04-18 21:14:56.126574] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.380 [2024-04-18 21:14:56.126583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-04-18 21:14:56.126596] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.380 [2024-04-18 21:14:56.126808] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.380 [2024-04-18 21:14:56.126817] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.380 [2024-04-18 21:14:56.126820] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.380 [2024-04-18 21:14:56.126824] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.380 [2024-04-18 21:14:56.126829] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:40.380 [2024-04-18 21:14:56.126837] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:40.380 [2024-04-18 21:14:56.126844] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.380 [2024-04-18 21:14:56.126847] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.380 [2024-04-18 21:14:56.126850] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.380 [2024-04-18 21:14:56.126857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-04-18 21:14:56.126869] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.380 [2024-04-18 21:14:56.126976] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.381 [2024-04-18 21:14:56.126983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.381 [2024-04-18 21:14:56.126986] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.126989] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.381 [2024-04-18 21:14:56.126994] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:40.381 [2024-04-18 21:14:56.127002] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:40.381 [2024-04-18 21:14:56.127008] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127012] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127015] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.381 [2024-04-18 21:14:56.127022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-04-18 21:14:56.127033] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.381 [2024-04-18 21:14:56.127139] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.381 [2024-04-18 21:14:56.127146] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.381 [2024-04-18 21:14:56.127149] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127152] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.381 [2024-04-18 21:14:56.127157] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:40.381 [2024-04-18 21:14:56.127166] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127170] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127173] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.381 [2024-04-18 21:14:56.127179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-04-18 21:14:56.127189] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.381 [2024-04-18 21:14:56.127291] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.381 [2024-04-18 21:14:56.127297] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.381 [2024-04-18 21:14:56.127300] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127304] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.381 [2024-04-18 21:14:56.127308] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:40.381 [2024-04-18 21:14:56.127312] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:40.381 [2024-04-18 21:14:56.127320] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:40.381 [2024-04-18 21:14:56.127425] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:40.381 [2024-04-18 21:14:56.127429] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:40.381 [2024-04-18 21:14:56.127436] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127440] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127443] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.381 [2024-04-18 21:14:56.127449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-04-18 21:14:56.127460] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.381 [2024-04-18 21:14:56.127573] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.381 [2024-04-18 21:14:56.127580] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.381 [2024-04-18 21:14:56.127583] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.381 
[2024-04-18 21:14:56.127587] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.381 [2024-04-18 21:14:56.127591] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:40.381 [2024-04-18 21:14:56.127601] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127605] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127608] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.381 [2024-04-18 21:14:56.127614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-04-18 21:14:56.127625] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.381 [2024-04-18 21:14:56.127736] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.381 [2024-04-18 21:14:56.127742] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.381 [2024-04-18 21:14:56.127745] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127748] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.381 [2024-04-18 21:14:56.127752] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:40.381 [2024-04-18 21:14:56.127756] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:40.381 [2024-04-18 21:14:56.127764] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:40.381 [2024-04-18 21:14:56.127773] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:40.381 [2024-04-18 21:14:56.127783] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127787] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.381 [2024-04-18 21:14:56.127793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-04-18 21:14:56.127805] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.381 [2024-04-18 21:14:56.127938] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.381 [2024-04-18 21:14:56.127945] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.381 [2024-04-18 21:14:56.127948] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127952] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x51acb0): datao=0, datal=4096, cccid=0 00:21:40.381 [2024-04-18 21:14:56.127956] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x582a00) on tqpair(0x51acb0): expected_datao=0, payload_size=4096 00:21:40.381 [2024-04-18 21:14:56.127960] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.381 
[2024-04-18 21:14:56.127966] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.127970] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.128113] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.381 [2024-04-18 21:14:56.128118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.381 [2024-04-18 21:14:56.128121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.381 [2024-04-18 21:14:56.128124] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.381 [2024-04-18 21:14:56.128131] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:40.381 [2024-04-18 21:14:56.128135] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:40.382 [2024-04-18 21:14:56.128139] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:40.382 [2024-04-18 21:14:56.128158] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:40.382 [2024-04-18 21:14:56.128162] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:40.382 [2024-04-18 21:14:56.128167] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:40.382 [2024-04-18 21:14:56.128179] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:40.382 [2024-04-18 21:14:56.128188] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128191] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128194] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.382 [2024-04-18 21:14:56.128201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:40.382 [2024-04-18 21:14:56.128213] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.382 [2024-04-18 21:14:56.128324] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.382 [2024-04-18 21:14:56.128331] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.382 [2024-04-18 21:14:56.128333] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128337] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582a00) on tqpair=0x51acb0 00:21:40.382 [2024-04-18 21:14:56.128344] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128349] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128353] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x51acb0) 00:21:40.382 [2024-04-18 21:14:56.128359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.382 [2024-04-18 21:14:56.128364] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128367] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128370] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x51acb0) 00:21:40.382 [2024-04-18 21:14:56.128375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.382 [2024-04-18 21:14:56.128380] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128383] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128386] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x51acb0) 00:21:40.382 [2024-04-18 21:14:56.128391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.382 [2024-04-18 21:14:56.128396] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128399] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128402] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.382 [2024-04-18 21:14:56.128407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.382 [2024-04-18 21:14:56.128411] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:40.382 [2024-04-18 21:14:56.128422] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:40.382 [2024-04-18 21:14:56.128428] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128432] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x51acb0) 00:21:40.382 [2024-04-18 21:14:56.128438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-04-18 21:14:56.128450] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582a00, cid 0, qid 0 00:21:40.382 [2024-04-18 21:14:56.128455] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582b60, cid 1, qid 0 00:21:40.382 [2024-04-18 21:14:56.128459] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582cc0, cid 2, qid 0 00:21:40.382 [2024-04-18 21:14:56.128463] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.382 [2024-04-18 21:14:56.128467] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582f80, cid 4, qid 0 00:21:40.382 [2024-04-18 21:14:56.128614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.382 [2024-04-18 21:14:56.128622] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.382 [2024-04-18 21:14:56.128625] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128628] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582f80) on tqpair=0x51acb0 00:21:40.382 [2024-04-18 21:14:56.128633] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:40.382 [2024-04-18 21:14:56.128637] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:40.382 [2024-04-18 21:14:56.128649] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128652] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x51acb0) 00:21:40.382 [2024-04-18 21:14:56.128662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-04-18 21:14:56.128674] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582f80, cid 4, qid 0 00:21:40.382 [2024-04-18 21:14:56.128788] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.382 [2024-04-18 21:14:56.128795] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.382 [2024-04-18 21:14:56.128798] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128801] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x51acb0): datao=0, datal=4096, cccid=4 00:21:40.382 [2024-04-18 21:14:56.128805] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x582f80) on tqpair(0x51acb0): expected_datao=0, payload_size=4096 00:21:40.382 [2024-04-18 21:14:56.128809] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128939] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.128943] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.169684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.382 [2024-04-18 21:14:56.169696] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.382 [2024-04-18 21:14:56.169699] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.169703] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582f80) on tqpair=0x51acb0 00:21:40.382 [2024-04-18 21:14:56.169716] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:40.382 [2024-04-18 21:14:56.169739] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.169743] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x51acb0) 00:21:40.382 [2024-04-18 21:14:56.169750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-04-18 21:14:56.169756] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.169759] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.382 [2024-04-18 21:14:56.169762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x51acb0) 00:21:40.383 [2024-04-18 21:14:56.169768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.383 [2024-04-18 21:14:56.169783] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x582f80, cid 4, qid 0 00:21:40.383 [2024-04-18 21:14:56.169788] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5830e0, cid 5, qid 0 00:21:40.383 [2024-04-18 21:14:56.169932] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.383 [2024-04-18 21:14:56.169940] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.383 [2024-04-18 21:14:56.169943] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.169946] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x51acb0): datao=0, datal=1024, cccid=4 00:21:40.383 [2024-04-18 21:14:56.169950] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x582f80) on tqpair(0x51acb0): expected_datao=0, payload_size=1024 00:21:40.383 [2024-04-18 21:14:56.169954] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.169960] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.169963] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.169968] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.383 [2024-04-18 21:14:56.169973] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.383 [2024-04-18 21:14:56.169976] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.169979] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5830e0) on tqpair=0x51acb0 00:21:40.383 [2024-04-18 21:14:56.214517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.383 [2024-04-18 21:14:56.214526] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.383 [2024-04-18 21:14:56.214530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.214533] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582f80) on tqpair=0x51acb0 00:21:40.383 [2024-04-18 21:14:56.214544] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.214548] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x51acb0) 00:21:40.383 [2024-04-18 21:14:56.214554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-04-18 21:14:56.214569] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582f80, cid 4, qid 0 00:21:40.383 [2024-04-18 21:14:56.214764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.383 [2024-04-18 21:14:56.214772] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.383 [2024-04-18 21:14:56.214775] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.214778] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x51acb0): datao=0, datal=3072, cccid=4 00:21:40.383 [2024-04-18 21:14:56.214782] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x582f80) on tqpair(0x51acb0): expected_datao=0, payload_size=3072 00:21:40.383 [2024-04-18 21:14:56.214786] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.214792] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.214795] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.214957] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.383 [2024-04-18 21:14:56.214962] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.383 [2024-04-18 21:14:56.214965] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.214968] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582f80) on tqpair=0x51acb0 00:21:40.383 [2024-04-18 21:14:56.214977] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.214980] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x51acb0) 00:21:40.383 [2024-04-18 21:14:56.214987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-04-18 21:14:56.215001] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582f80, cid 4, qid 0 00:21:40.383 [2024-04-18 21:14:56.215147] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.383 [2024-04-18 21:14:56.215154] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.383 [2024-04-18 21:14:56.215157] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.215160] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x51acb0): datao=0, datal=8, cccid=4 00:21:40.383 [2024-04-18 21:14:56.215164] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x582f80) on tqpair(0x51acb0): expected_datao=0, payload_size=8 00:21:40.383 [2024-04-18 21:14:56.215168] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.215173] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.215177] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.255732] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.383 [2024-04-18 21:14:56.255746] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.383 [2024-04-18 21:14:56.255749] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.383 [2024-04-18 21:14:56.255753] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582f80) on tqpair=0x51acb0 00:21:40.383 ===================================================== 00:21:40.383 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:40.383 ===================================================== 00:21:40.383 Controller Capabilities/Features 00:21:40.383 ================================ 00:21:40.383 Vendor ID: 0000 00:21:40.383 Subsystem Vendor ID: 0000 00:21:40.383 Serial Number: .................... 00:21:40.383 Model Number: ........................................ 
00:21:40.383 Firmware Version: 24.05 00:21:40.383 Recommended Arb Burst: 0 00:21:40.383 IEEE OUI Identifier: 00 00 00 00:21:40.383 Multi-path I/O 00:21:40.383 May have multiple subsystem ports: No 00:21:40.383 May have multiple controllers: No 00:21:40.383 Associated with SR-IOV VF: No 00:21:40.383 Max Data Transfer Size: 131072 00:21:40.383 Max Number of Namespaces: 0 00:21:40.383 Max Number of I/O Queues: 1024 00:21:40.383 NVMe Specification Version (VS): 1.3 00:21:40.383 NVMe Specification Version (Identify): 1.3 00:21:40.383 Maximum Queue Entries: 128 00:21:40.383 Contiguous Queues Required: Yes 00:21:40.383 Arbitration Mechanisms Supported 00:21:40.383 Weighted Round Robin: Not Supported 00:21:40.383 Vendor Specific: Not Supported 00:21:40.383 Reset Timeout: 15000 ms 00:21:40.383 Doorbell Stride: 4 bytes 00:21:40.383 NVM Subsystem Reset: Not Supported 00:21:40.383 Command Sets Supported 00:21:40.383 NVM Command Set: Supported 00:21:40.383 Boot Partition: Not Supported 00:21:40.383 Memory Page Size Minimum: 4096 bytes 00:21:40.383 Memory Page Size Maximum: 4096 bytes 00:21:40.383 Persistent Memory Region: Not Supported 00:21:40.383 Optional Asynchronous Events Supported 00:21:40.383 Namespace Attribute Notices: Not Supported 00:21:40.383 Firmware Activation Notices: Not Supported 00:21:40.383 ANA Change Notices: Not Supported 00:21:40.384 PLE Aggregate Log Change Notices: Not Supported 00:21:40.384 LBA Status Info Alert Notices: Not Supported 00:21:40.384 EGE Aggregate Log Change Notices: Not Supported 00:21:40.384 Normal NVM Subsystem Shutdown event: Not Supported 00:21:40.384 Zone Descriptor Change Notices: Not Supported 00:21:40.384 Discovery Log Change Notices: Supported 00:21:40.384 Controller Attributes 00:21:40.384 128-bit Host Identifier: Not Supported 00:21:40.384 Non-Operational Permissive Mode: Not Supported 00:21:40.384 NVM Sets: Not Supported 00:21:40.384 Read Recovery Levels: Not Supported 00:21:40.384 Endurance Groups: Not Supported 00:21:40.384 Predictable Latency Mode: Not Supported 00:21:40.384 Traffic Based Keep ALive: Not Supported 00:21:40.384 Namespace Granularity: Not Supported 00:21:40.384 SQ Associations: Not Supported 00:21:40.384 UUID List: Not Supported 00:21:40.384 Multi-Domain Subsystem: Not Supported 00:21:40.384 Fixed Capacity Management: Not Supported 00:21:40.384 Variable Capacity Management: Not Supported 00:21:40.384 Delete Endurance Group: Not Supported 00:21:40.384 Delete NVM Set: Not Supported 00:21:40.384 Extended LBA Formats Supported: Not Supported 00:21:40.384 Flexible Data Placement Supported: Not Supported 00:21:40.384 00:21:40.384 Controller Memory Buffer Support 00:21:40.384 ================================ 00:21:40.384 Supported: No 00:21:40.384 00:21:40.384 Persistent Memory Region Support 00:21:40.384 ================================ 00:21:40.384 Supported: No 00:21:40.384 00:21:40.384 Admin Command Set Attributes 00:21:40.384 ============================ 00:21:40.384 Security Send/Receive: Not Supported 00:21:40.384 Format NVM: Not Supported 00:21:40.384 Firmware Activate/Download: Not Supported 00:21:40.384 Namespace Management: Not Supported 00:21:40.384 Device Self-Test: Not Supported 00:21:40.384 Directives: Not Supported 00:21:40.384 NVMe-MI: Not Supported 00:21:40.384 Virtualization Management: Not Supported 00:21:40.384 Doorbell Buffer Config: Not Supported 00:21:40.384 Get LBA Status Capability: Not Supported 00:21:40.384 Command & Feature Lockdown Capability: Not Supported 00:21:40.384 Abort Command Limit: 1 00:21:40.384 Async 
Event Request Limit: 4 00:21:40.384 Number of Firmware Slots: N/A 00:21:40.384 Firmware Slot 1 Read-Only: N/A 00:21:40.384 Firmware Activation Without Reset: N/A 00:21:40.384 Multiple Update Detection Support: N/A 00:21:40.384 Firmware Update Granularity: No Information Provided 00:21:40.384 Per-Namespace SMART Log: No 00:21:40.384 Asymmetric Namespace Access Log Page: Not Supported 00:21:40.384 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:40.384 Command Effects Log Page: Not Supported 00:21:40.384 Get Log Page Extended Data: Supported 00:21:40.384 Telemetry Log Pages: Not Supported 00:21:40.384 Persistent Event Log Pages: Not Supported 00:21:40.384 Supported Log Pages Log Page: May Support 00:21:40.384 Commands Supported & Effects Log Page: Not Supported 00:21:40.384 Feature Identifiers & Effects Log Page:May Support 00:21:40.384 NVMe-MI Commands & Effects Log Page: May Support 00:21:40.384 Data Area 4 for Telemetry Log: Not Supported 00:21:40.384 Error Log Page Entries Supported: 128 00:21:40.384 Keep Alive: Not Supported 00:21:40.384 00:21:40.384 NVM Command Set Attributes 00:21:40.384 ========================== 00:21:40.384 Submission Queue Entry Size 00:21:40.384 Max: 1 00:21:40.384 Min: 1 00:21:40.384 Completion Queue Entry Size 00:21:40.384 Max: 1 00:21:40.384 Min: 1 00:21:40.384 Number of Namespaces: 0 00:21:40.384 Compare Command: Not Supported 00:21:40.384 Write Uncorrectable Command: Not Supported 00:21:40.384 Dataset Management Command: Not Supported 00:21:40.384 Write Zeroes Command: Not Supported 00:21:40.384 Set Features Save Field: Not Supported 00:21:40.384 Reservations: Not Supported 00:21:40.384 Timestamp: Not Supported 00:21:40.384 Copy: Not Supported 00:21:40.384 Volatile Write Cache: Not Present 00:21:40.384 Atomic Write Unit (Normal): 1 00:21:40.384 Atomic Write Unit (PFail): 1 00:21:40.384 Atomic Compare & Write Unit: 1 00:21:40.384 Fused Compare & Write: Supported 00:21:40.384 Scatter-Gather List 00:21:40.384 SGL Command Set: Supported 00:21:40.384 SGL Keyed: Supported 00:21:40.384 SGL Bit Bucket Descriptor: Not Supported 00:21:40.384 SGL Metadata Pointer: Not Supported 00:21:40.384 Oversized SGL: Not Supported 00:21:40.384 SGL Metadata Address: Not Supported 00:21:40.384 SGL Offset: Supported 00:21:40.384 Transport SGL Data Block: Not Supported 00:21:40.384 Replay Protected Memory Block: Not Supported 00:21:40.384 00:21:40.384 Firmware Slot Information 00:21:40.384 ========================= 00:21:40.384 Active slot: 0 00:21:40.384 00:21:40.384 00:21:40.384 Error Log 00:21:40.384 ========= 00:21:40.384 00:21:40.384 Active Namespaces 00:21:40.384 ================= 00:21:40.384 Discovery Log Page 00:21:40.384 ================== 00:21:40.384 Generation Counter: 2 00:21:40.384 Number of Records: 2 00:21:40.384 Record Format: 0 00:21:40.384 00:21:40.384 Discovery Log Entry 0 00:21:40.384 ---------------------- 00:21:40.384 Transport Type: 3 (TCP) 00:21:40.384 Address Family: 1 (IPv4) 00:21:40.384 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:40.384 Entry Flags: 00:21:40.384 Duplicate Returned Information: 1 00:21:40.384 Explicit Persistent Connection Support for Discovery: 1 00:21:40.384 Transport Requirements: 00:21:40.384 Secure Channel: Not Required 00:21:40.384 Port ID: 0 (0x0000) 00:21:40.384 Controller ID: 65535 (0xffff) 00:21:40.384 Admin Max SQ Size: 128 00:21:40.384 Transport Service Identifier: 4420 00:21:40.385 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:40.385 Transport Address: 10.0.0.2 00:21:40.385 
Discovery Log Entry 1 00:21:40.385 ---------------------- 00:21:40.385 Transport Type: 3 (TCP) 00:21:40.385 Address Family: 1 (IPv4) 00:21:40.385 Subsystem Type: 2 (NVM Subsystem) 00:21:40.385 Entry Flags: 00:21:40.385 Duplicate Returned Information: 0 00:21:40.385 Explicit Persistent Connection Support for Discovery: 0 00:21:40.385 Transport Requirements: 00:21:40.385 Secure Channel: Not Required 00:21:40.385 Port ID: 0 (0x0000) 00:21:40.385 Controller ID: 65535 (0xffff) 00:21:40.385 Admin Max SQ Size: 128 00:21:40.385 Transport Service Identifier: 4420 00:21:40.385 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:40.385 Transport Address: 10.0.0.2 [2024-04-18 21:14:56.255837] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:40.385 [2024-04-18 21:14:56.255851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.385 [2024-04-18 21:14:56.255857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.385 [2024-04-18 21:14:56.255862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.385 [2024-04-18 21:14:56.255867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.385 [2024-04-18 21:14:56.255875] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.255879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.255882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.385 [2024-04-18 21:14:56.255888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-04-18 21:14:56.255901] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.385 [2024-04-18 21:14:56.256012] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.385 [2024-04-18 21:14:56.256019] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.385 [2024-04-18 21:14:56.256022] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256026] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.385 [2024-04-18 21:14:56.256032] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256036] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256039] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.385 [2024-04-18 21:14:56.256045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-04-18 21:14:56.256060] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.385 [2024-04-18 21:14:56.256213] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.385 [2024-04-18 21:14:56.256220] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.385 [2024-04-18 21:14:56.256223] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256226] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.385 [2024-04-18 21:14:56.256230] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:40.385 [2024-04-18 21:14:56.256234] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:40.385 [2024-04-18 21:14:56.256243] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256247] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256250] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.385 [2024-04-18 21:14:56.256256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-04-18 21:14:56.256266] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.385 [2024-04-18 21:14:56.256413] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.385 [2024-04-18 21:14:56.256420] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.385 [2024-04-18 21:14:56.256423] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256426] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.385 [2024-04-18 21:14:56.256436] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256442] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256446] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.385 [2024-04-18 21:14:56.256451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-04-18 21:14:56.256462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.385 [2024-04-18 21:14:56.256615] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.385 [2024-04-18 21:14:56.256622] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.385 [2024-04-18 21:14:56.256625] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256629] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.385 [2024-04-18 21:14:56.256638] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256642] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256645] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.385 [2024-04-18 21:14:56.256651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-04-18 21:14:56.256662] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.385 [2024-04-18 21:14:56.256766] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.385 [2024-04-18 
21:14:56.256772] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.385 [2024-04-18 21:14:56.256775] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256779] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.385 [2024-04-18 21:14:56.256787] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256791] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.385 [2024-04-18 21:14:56.256794] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.385 [2024-04-18 21:14:56.256800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-04-18 21:14:56.256811] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.385 [2024-04-18 21:14:56.256916] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.385 [2024-04-18 21:14:56.256922] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.385 [2024-04-18 21:14:56.256925] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.256928] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.386 [2024-04-18 21:14:56.256938] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.256942] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.256945] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.386 [2024-04-18 21:14:56.256951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-04-18 21:14:56.256961] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.386 [2024-04-18 21:14:56.257069] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.386 [2024-04-18 21:14:56.257075] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.386 [2024-04-18 21:14:56.257078] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257081] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.386 [2024-04-18 21:14:56.257091] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257094] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257100] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.386 [2024-04-18 21:14:56.257106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-04-18 21:14:56.257117] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.386 [2024-04-18 21:14:56.257270] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.386 [2024-04-18 21:14:56.257276] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.386 [2024-04-18 21:14:56.257279] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.386 
[2024-04-18 21:14:56.257282] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.386 [2024-04-18 21:14:56.257291] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257295] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257298] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.386 [2024-04-18 21:14:56.257304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-04-18 21:14:56.257313] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.386 [2024-04-18 21:14:56.257421] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.386 [2024-04-18 21:14:56.257427] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.386 [2024-04-18 21:14:56.257430] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257433] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.386 [2024-04-18 21:14:56.257442] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257446] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257449] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.386 [2024-04-18 21:14:56.257455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-04-18 21:14:56.257465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.386 [2024-04-18 21:14:56.257623] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.386 [2024-04-18 21:14:56.257630] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.386 [2024-04-18 21:14:56.257633] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257636] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.386 [2024-04-18 21:14:56.257646] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257650] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.386 [2024-04-18 21:14:56.257659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-04-18 21:14:56.257670] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.386 [2024-04-18 21:14:56.257826] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.386 [2024-04-18 21:14:56.257832] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.386 [2024-04-18 21:14:56.257835] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257838] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.386 [2024-04-18 21:14:56.257848] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257851] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.257855] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.386 [2024-04-18 21:14:56.257863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-04-18 21:14:56.257874] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.386 [2024-04-18 21:14:56.258028] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.386 [2024-04-18 21:14:56.258034] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.386 [2024-04-18 21:14:56.258037] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.386 [2024-04-18 21:14:56.258041] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.386 [2024-04-18 21:14:56.258050] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258053] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258056] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.387 [2024-04-18 21:14:56.258062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.387 [2024-04-18 21:14:56.258072] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.387 [2024-04-18 21:14:56.258174] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.387 [2024-04-18 21:14:56.258180] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.387 [2024-04-18 21:14:56.258183] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258186] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.387 [2024-04-18 21:14:56.258196] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258199] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258203] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.387 [2024-04-18 21:14:56.258208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.387 [2024-04-18 21:14:56.258218] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.387 [2024-04-18 21:14:56.258329] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.387 [2024-04-18 21:14:56.258335] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.387 [2024-04-18 21:14:56.258338] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258341] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.387 [2024-04-18 21:14:56.258350] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258354] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258357] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.387 [2024-04-18 21:14:56.258363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.387 [2024-04-18 21:14:56.258373] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.387 [2024-04-18 21:14:56.258481] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.387 [2024-04-18 21:14:56.258487] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.387 [2024-04-18 21:14:56.258490] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258493] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.387 [2024-04-18 21:14:56.258502] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258506] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.258509] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x51acb0) 00:21:40.387 [2024-04-18 21:14:56.262521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.387 [2024-04-18 21:14:56.262538] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x582e20, cid 3, qid 0 00:21:40.387 [2024-04-18 21:14:56.262747] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.387 [2024-04-18 21:14:56.262754] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.387 [2024-04-18 21:14:56.262757] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.387 [2024-04-18 21:14:56.262760] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x582e20) on tqpair=0x51acb0 00:21:40.387 [2024-04-18 21:14:56.262768] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:40.387 00:21:40.387 21:14:56 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:40.387 [2024-04-18 21:14:56.296755] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:21:40.387 [2024-04-18 21:14:56.296794] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129367 ] 00:21:40.387 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.649 [2024-04-18 21:14:56.326799] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:40.649 [2024-04-18 21:14:56.326839] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:40.649 [2024-04-18 21:14:56.326844] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:40.649 [2024-04-18 21:14:56.326855] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:40.649 [2024-04-18 21:14:56.326862] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:40.649 [2024-04-18 21:14:56.327299] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:40.649 [2024-04-18 21:14:56.327322] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16dbcb0 0 00:21:40.649 [2024-04-18 21:14:56.333525] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:40.649 [2024-04-18 21:14:56.333540] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:40.649 [2024-04-18 21:14:56.333544] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:40.649 [2024-04-18 21:14:56.333547] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:40.649 [2024-04-18 21:14:56.333575] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.649 [2024-04-18 21:14:56.333580] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.649 [2024-04-18 21:14:56.333583] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.649 [2024-04-18 21:14:56.333594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:40.649 [2024-04-18 21:14:56.333610] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.649 [2024-04-18 21:14:56.341518] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.649 [2024-04-18 21:14:56.341527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.649 [2024-04-18 21:14:56.341530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.649 [2024-04-18 21:14:56.341534] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on tqpair=0x16dbcb0 00:21:40.649 [2024-04-18 21:14:56.341545] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:40.649 [2024-04-18 21:14:56.341550] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:40.649 [2024-04-18 21:14:56.341558] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:40.649 [2024-04-18 21:14:56.341566] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.341570] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.650 [2024-04-18 
21:14:56.341573] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.650 [2024-04-18 21:14:56.341580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.650 [2024-04-18 21:14:56.341592] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.650 [2024-04-18 21:14:56.341795] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.650 [2024-04-18 21:14:56.341803] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.650 [2024-04-18 21:14:56.341806] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.341809] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on tqpair=0x16dbcb0 00:21:40.650 [2024-04-18 21:14:56.341815] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:40.650 [2024-04-18 21:14:56.341823] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:40.650 [2024-04-18 21:14:56.341830] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.341833] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.341836] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.650 [2024-04-18 21:14:56.341843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.650 [2024-04-18 21:14:56.341855] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.650 [2024-04-18 21:14:56.341964] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.650 [2024-04-18 21:14:56.341971] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.650 [2024-04-18 21:14:56.341973] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.341977] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on tqpair=0x16dbcb0 00:21:40.650 [2024-04-18 21:14:56.341982] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:40.650 [2024-04-18 21:14:56.341990] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:40.650 [2024-04-18 21:14:56.341996] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342000] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342003] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.650 [2024-04-18 21:14:56.342009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.650 [2024-04-18 21:14:56.342020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.650 [2024-04-18 21:14:56.342129] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.650 [2024-04-18 21:14:56.342135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:40.650 [2024-04-18 21:14:56.342138] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342142] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on tqpair=0x16dbcb0 00:21:40.650 [2024-04-18 21:14:56.342147] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:40.650 [2024-04-18 21:14:56.342156] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342162] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342166] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.650 [2024-04-18 21:14:56.342172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.650 [2024-04-18 21:14:56.342183] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.650 [2024-04-18 21:14:56.342295] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.650 [2024-04-18 21:14:56.342302] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.650 [2024-04-18 21:14:56.342305] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342308] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on tqpair=0x16dbcb0 00:21:40.650 [2024-04-18 21:14:56.342313] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:40.650 [2024-04-18 21:14:56.342317] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:40.650 [2024-04-18 21:14:56.342324] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:40.650 [2024-04-18 21:14:56.342429] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:40.650 [2024-04-18 21:14:56.342433] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:40.650 [2024-04-18 21:14:56.342440] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342443] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342446] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.650 [2024-04-18 21:14:56.342452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.650 [2024-04-18 21:14:56.342463] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.650 [2024-04-18 21:14:56.342658] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.650 [2024-04-18 21:14:56.342665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.650 [2024-04-18 21:14:56.342668] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on 
tqpair=0x16dbcb0 00:21:40.650 [2024-04-18 21:14:56.342676] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:40.650 [2024-04-18 21:14:56.342686] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342690] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342693] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.650 [2024-04-18 21:14:56.342699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.650 [2024-04-18 21:14:56.342710] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.650 [2024-04-18 21:14:56.342815] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.650 [2024-04-18 21:14:56.342821] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.650 [2024-04-18 21:14:56.342824] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342827] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on tqpair=0x16dbcb0 00:21:40.650 [2024-04-18 21:14:56.342832] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:40.650 [2024-04-18 21:14:56.342839] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:40.650 [2024-04-18 21:14:56.342847] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:40.650 [2024-04-18 21:14:56.342855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:40.650 [2024-04-18 21:14:56.342862] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.342866] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.650 [2024-04-18 21:14:56.342872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.650 [2024-04-18 21:14:56.342883] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.650 [2024-04-18 21:14:56.343033] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.650 [2024-04-18 21:14:56.343041] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.650 [2024-04-18 21:14:56.343043] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.343047] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dbcb0): datao=0, datal=4096, cccid=0 00:21:40.650 [2024-04-18 21:14:56.343050] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1743a00) on tqpair(0x16dbcb0): expected_datao=0, payload_size=4096 00:21:40.650 [2024-04-18 21:14:56.343054] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.343188] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.343192] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.383687] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.650 [2024-04-18 21:14:56.383700] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.650 [2024-04-18 21:14:56.383704] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.383707] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on tqpair=0x16dbcb0 00:21:40.650 [2024-04-18 21:14:56.383715] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:40.650 [2024-04-18 21:14:56.383720] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:40.650 [2024-04-18 21:14:56.383724] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:40.650 [2024-04-18 21:14:56.383727] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:40.650 [2024-04-18 21:14:56.383731] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:40.650 [2024-04-18 21:14:56.383735] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:40.650 [2024-04-18 21:14:56.383747] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:40.650 [2024-04-18 21:14:56.383755] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.383759] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.383762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.650 [2024-04-18 21:14:56.383769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:40.650 [2024-04-18 21:14:56.383782] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.650 [2024-04-18 21:14:56.383889] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.650 [2024-04-18 21:14:56.383896] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.650 [2024-04-18 21:14:56.383902] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.650 [2024-04-18 21:14:56.383906] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743a00) on tqpair=0x16dbcb0 00:21:40.651 [2024-04-18 21:14:56.383912] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383916] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383919] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.383925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.651 [2024-04-18 21:14:56.383930] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383933] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383936] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.383941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.651 [2024-04-18 21:14:56.383946] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383949] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383951] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.383956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.651 [2024-04-18 21:14:56.383961] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383964] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383967] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.383972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.651 [2024-04-18 21:14:56.383977] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.383987] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.383993] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.383996] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.384002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.651 [2024-04-18 21:14:56.384015] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743a00, cid 0, qid 0 00:21:40.651 [2024-04-18 21:14:56.384019] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743b60, cid 1, qid 0 00:21:40.651 [2024-04-18 21:14:56.384023] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743cc0, cid 2, qid 0 00:21:40.651 [2024-04-18 21:14:56.384027] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.651 [2024-04-18 21:14:56.384031] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743f80, cid 4, qid 0 00:21:40.651 [2024-04-18 21:14:56.384173] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.651 [2024-04-18 21:14:56.384181] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.651 [2024-04-18 21:14:56.384184] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.384187] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743f80) on tqpair=0x16dbcb0 00:21:40.651 [2024-04-18 21:14:56.384192] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:40.651 [2024-04-18 21:14:56.384199] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.384210] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.384215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.384221] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.384225] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.384228] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.384234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:40.651 [2024-04-18 21:14:56.384245] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743f80, cid 4, qid 0 00:21:40.651 [2024-04-18 21:14:56.384354] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.651 [2024-04-18 21:14:56.384361] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.651 [2024-04-18 21:14:56.384363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.384367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743f80) on tqpair=0x16dbcb0 00:21:40.651 [2024-04-18 21:14:56.384410] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.384419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.384426] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.384430] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.384436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.651 [2024-04-18 21:14:56.384447] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743f80, cid 4, qid 0 00:21:40.651 [2024-04-18 21:14:56.388518] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.651 [2024-04-18 21:14:56.388525] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.651 [2024-04-18 21:14:56.388528] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.388531] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dbcb0): datao=0, datal=4096, cccid=4 00:21:40.651 [2024-04-18 21:14:56.388535] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1743f80) on tqpair(0x16dbcb0): expected_datao=0, payload_size=4096 00:21:40.651 [2024-04-18 21:14:56.388539] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.388545] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.388548] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.428518] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.651 [2024-04-18 21:14:56.428526] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.651 [2024-04-18 21:14:56.428530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.428533] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743f80) on tqpair=0x16dbcb0 00:21:40.651 [2024-04-18 21:14:56.428546] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:40.651 [2024-04-18 21:14:56.428558] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.428567] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.428577] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.428581] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.428587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.651 [2024-04-18 21:14:56.428599] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743f80, cid 4, qid 0 00:21:40.651 [2024-04-18 21:14:56.428806] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.651 [2024-04-18 21:14:56.428814] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.651 [2024-04-18 21:14:56.428817] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.428820] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dbcb0): datao=0, datal=4096, cccid=4 00:21:40.651 [2024-04-18 21:14:56.428824] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1743f80) on tqpair(0x16dbcb0): expected_datao=0, payload_size=4096 00:21:40.651 [2024-04-18 21:14:56.428828] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.428973] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.428977] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.469667] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.651 [2024-04-18 21:14:56.469678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.651 [2024-04-18 21:14:56.469682] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.469685] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743f80) on tqpair=0x16dbcb0 00:21:40.651 [2024-04-18 21:14:56.469700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.469709] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:40.651 [2024-04-18 21:14:56.469717] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.469721] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x16dbcb0) 00:21:40.651 [2024-04-18 21:14:56.469728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.651 [2024-04-18 21:14:56.469740] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743f80, cid 4, qid 0 00:21:40.651 [2024-04-18 21:14:56.469857] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.651 [2024-04-18 21:14:56.469864] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.651 [2024-04-18 21:14:56.469867] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.469870] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dbcb0): datao=0, datal=4096, cccid=4 00:21:40.651 [2024-04-18 21:14:56.469873] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1743f80) on tqpair(0x16dbcb0): expected_datao=0, payload_size=4096 00:21:40.651 [2024-04-18 21:14:56.469877] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.470025] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.470029] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.510704] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.651 [2024-04-18 21:14:56.510717] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.651 [2024-04-18 21:14:56.510720] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.651 [2024-04-18 21:14:56.510724] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743f80) on tqpair=0x16dbcb0 00:21:40.651 [2024-04-18 21:14:56.510734] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:40.652 [2024-04-18 21:14:56.510746] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:40.652 [2024-04-18 21:14:56.510770] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:40.652 [2024-04-18 21:14:56.510775] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:40.652 [2024-04-18 21:14:56.510780] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:40.652 [2024-04-18 21:14:56.510784] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:40.652 [2024-04-18 21:14:56.510788] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:40.652 [2024-04-18 21:14:56.510793] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:40.652 [2024-04-18 21:14:56.510806] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.510810] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.510817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.652 [2024-04-18 21:14:56.510822] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.510826] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.510829] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.510834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.652 [2024-04-18 21:14:56.510848] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743f80, cid 4, qid 0 00:21:40.652 [2024-04-18 21:14:56.510853] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17440e0, cid 5, qid 0 00:21:40.652 [2024-04-18 21:14:56.510974] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.652 [2024-04-18 21:14:56.510982] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.652 [2024-04-18 21:14:56.510985] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.510988] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743f80) on tqpair=0x16dbcb0 00:21:40.652 [2024-04-18 21:14:56.510995] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.652 [2024-04-18 21:14:56.511000] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.652 [2024-04-18 21:14:56.511003] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511006] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17440e0) on tqpair=0x16dbcb0 00:21:40.652 [2024-04-18 21:14:56.511016] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.511025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.652 [2024-04-18 21:14:56.511036] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17440e0, cid 5, qid 0 00:21:40.652 [2024-04-18 21:14:56.511153] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.652 [2024-04-18 21:14:56.511160] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.652 [2024-04-18 21:14:56.511163] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511166] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17440e0) on tqpair=0x16dbcb0 00:21:40.652 [2024-04-18 21:14:56.511176] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.511188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.652 [2024-04-18 21:14:56.511199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17440e0, cid 5, qid 0 00:21:40.652 [2024-04-18 21:14:56.511414] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.652 [2024-04-18 21:14:56.511419] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.652 [2024-04-18 21:14:56.511423] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511426] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17440e0) on tqpair=0x16dbcb0 00:21:40.652 [2024-04-18 21:14:56.511435] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511438] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.511444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.652 [2024-04-18 21:14:56.511453] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17440e0, cid 5, qid 0 00:21:40.652 [2024-04-18 21:14:56.511566] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.652 [2024-04-18 21:14:56.511573] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.652 [2024-04-18 21:14:56.511576] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511580] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17440e0) on tqpair=0x16dbcb0 00:21:40.652 [2024-04-18 21:14:56.511592] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511596] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.511602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.652 [2024-04-18 21:14:56.511608] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511611] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.511616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.652 [2024-04-18 21:14:56.511622] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511625] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.511631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.652 [2024-04-18 21:14:56.511637] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511640] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16dbcb0) 00:21:40.652 [2024-04-18 21:14:56.511645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.652 [2024-04-18 21:14:56.511658] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17440e0, cid 5, qid 0 00:21:40.652 [2024-04-18 21:14:56.511663] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743f80, cid 4, qid 0 00:21:40.652 [2024-04-18 21:14:56.511667] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1744240, cid 6, qid 0 00:21:40.652 [2024-04-18 21:14:56.511671] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17443a0, cid 7, qid 0 00:21:40.652 [2024-04-18 21:14:56.511947] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.652 [2024-04-18 21:14:56.511959] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.652 [2024-04-18 21:14:56.511963] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511966] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dbcb0): datao=0, datal=8192, cccid=5 00:21:40.652 [2024-04-18 21:14:56.511970] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17440e0) on tqpair(0x16dbcb0): expected_datao=0, payload_size=8192 00:21:40.652 [2024-04-18 21:14:56.511974] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511980] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511984] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511988] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.652 [2024-04-18 21:14:56.511993] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.652 [2024-04-18 21:14:56.511996] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.511999] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dbcb0): datao=0, datal=512, cccid=4 00:21:40.652 [2024-04-18 21:14:56.512003] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1743f80) on tqpair(0x16dbcb0): expected_datao=0, payload_size=512 00:21:40.652 [2024-04-18 21:14:56.512007] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512012] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512015] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512020] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.652 [2024-04-18 21:14:56.512025] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.652 [2024-04-18 21:14:56.512028] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512031] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dbcb0): datao=0, datal=512, cccid=6 00:21:40.652 [2024-04-18 21:14:56.512034] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1744240) on tqpair(0x16dbcb0): expected_datao=0, payload_size=512 00:21:40.652 [2024-04-18 21:14:56.512038] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512043] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512046] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512051] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:40.652 [2024-04-18 21:14:56.512056] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:40.652 [2024-04-18 21:14:56.512059] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512062] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dbcb0): datao=0, datal=4096, cccid=7 
00:21:40.652 [2024-04-18 21:14:56.512066] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17443a0) on tqpair(0x16dbcb0): expected_datao=0, payload_size=4096 00:21:40.652 [2024-04-18 21:14:56.512069] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512075] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512078] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512127] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.652 [2024-04-18 21:14:56.512133] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.652 [2024-04-18 21:14:56.512136] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.652 [2024-04-18 21:14:56.512139] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17440e0) on tqpair=0x16dbcb0 00:21:40.653 [2024-04-18 21:14:56.512152] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.653 [2024-04-18 21:14:56.512157] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.653 [2024-04-18 21:14:56.512160] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.653 [2024-04-18 21:14:56.512163] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743f80) on tqpair=0x16dbcb0 00:21:40.653 [2024-04-18 21:14:56.512172] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.653 [2024-04-18 21:14:56.512177] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.653 [2024-04-18 21:14:56.512180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.653 [2024-04-18 21:14:56.512184] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1744240) on tqpair=0x16dbcb0 00:21:40.653 [2024-04-18 21:14:56.512190] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.653 [2024-04-18 21:14:56.512195] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.653 [2024-04-18 21:14:56.512198] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.653 [2024-04-18 21:14:56.512201] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17443a0) on tqpair=0x16dbcb0 00:21:40.653 ===================================================== 00:21:40.653 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.653 ===================================================== 00:21:40.653 Controller Capabilities/Features 00:21:40.653 ================================ 00:21:40.653 Vendor ID: 8086 00:21:40.653 Subsystem Vendor ID: 8086 00:21:40.653 Serial Number: SPDK00000000000001 00:21:40.653 Model Number: SPDK bdev Controller 00:21:40.653 Firmware Version: 24.05 00:21:40.653 Recommended Arb Burst: 6 00:21:40.653 IEEE OUI Identifier: e4 d2 5c 00:21:40.653 Multi-path I/O 00:21:40.653 May have multiple subsystem ports: Yes 00:21:40.653 May have multiple controllers: Yes 00:21:40.653 Associated with SR-IOV VF: No 00:21:40.653 Max Data Transfer Size: 131072 00:21:40.653 Max Number of Namespaces: 32 00:21:40.653 Max Number of I/O Queues: 127 00:21:40.653 NVMe Specification Version (VS): 1.3 00:21:40.653 NVMe Specification Version (Identify): 1.3 00:21:40.653 Maximum Queue Entries: 128 00:21:40.653 Contiguous Queues Required: Yes 00:21:40.653 Arbitration Mechanisms Supported 00:21:40.653 Weighted Round Robin: Not Supported 00:21:40.653 Vendor 
Specific: Not Supported 00:21:40.653 Reset Timeout: 15000 ms 00:21:40.653 Doorbell Stride: 4 bytes 00:21:40.653 NVM Subsystem Reset: Not Supported 00:21:40.653 Command Sets Supported 00:21:40.653 NVM Command Set: Supported 00:21:40.653 Boot Partition: Not Supported 00:21:40.653 Memory Page Size Minimum: 4096 bytes 00:21:40.653 Memory Page Size Maximum: 4096 bytes 00:21:40.653 Persistent Memory Region: Not Supported 00:21:40.653 Optional Asynchronous Events Supported 00:21:40.653 Namespace Attribute Notices: Supported 00:21:40.653 Firmware Activation Notices: Not Supported 00:21:40.653 ANA Change Notices: Not Supported 00:21:40.653 PLE Aggregate Log Change Notices: Not Supported 00:21:40.653 LBA Status Info Alert Notices: Not Supported 00:21:40.653 EGE Aggregate Log Change Notices: Not Supported 00:21:40.653 Normal NVM Subsystem Shutdown event: Not Supported 00:21:40.653 Zone Descriptor Change Notices: Not Supported 00:21:40.653 Discovery Log Change Notices: Not Supported 00:21:40.653 Controller Attributes 00:21:40.653 128-bit Host Identifier: Supported 00:21:40.653 Non-Operational Permissive Mode: Not Supported 00:21:40.653 NVM Sets: Not Supported 00:21:40.653 Read Recovery Levels: Not Supported 00:21:40.653 Endurance Groups: Not Supported 00:21:40.653 Predictable Latency Mode: Not Supported 00:21:40.653 Traffic Based Keep ALive: Not Supported 00:21:40.653 Namespace Granularity: Not Supported 00:21:40.653 SQ Associations: Not Supported 00:21:40.653 UUID List: Not Supported 00:21:40.653 Multi-Domain Subsystem: Not Supported 00:21:40.653 Fixed Capacity Management: Not Supported 00:21:40.653 Variable Capacity Management: Not Supported 00:21:40.653 Delete Endurance Group: Not Supported 00:21:40.653 Delete NVM Set: Not Supported 00:21:40.653 Extended LBA Formats Supported: Not Supported 00:21:40.653 Flexible Data Placement Supported: Not Supported 00:21:40.653 00:21:40.653 Controller Memory Buffer Support 00:21:40.653 ================================ 00:21:40.653 Supported: No 00:21:40.653 00:21:40.653 Persistent Memory Region Support 00:21:40.653 ================================ 00:21:40.653 Supported: No 00:21:40.653 00:21:40.653 Admin Command Set Attributes 00:21:40.653 ============================ 00:21:40.653 Security Send/Receive: Not Supported 00:21:40.653 Format NVM: Not Supported 00:21:40.653 Firmware Activate/Download: Not Supported 00:21:40.653 Namespace Management: Not Supported 00:21:40.653 Device Self-Test: Not Supported 00:21:40.653 Directives: Not Supported 00:21:40.653 NVMe-MI: Not Supported 00:21:40.653 Virtualization Management: Not Supported 00:21:40.653 Doorbell Buffer Config: Not Supported 00:21:40.653 Get LBA Status Capability: Not Supported 00:21:40.653 Command & Feature Lockdown Capability: Not Supported 00:21:40.653 Abort Command Limit: 4 00:21:40.653 Async Event Request Limit: 4 00:21:40.653 Number of Firmware Slots: N/A 00:21:40.653 Firmware Slot 1 Read-Only: N/A 00:21:40.653 Firmware Activation Without Reset: N/A 00:21:40.653 Multiple Update Detection Support: N/A 00:21:40.653 Firmware Update Granularity: No Information Provided 00:21:40.653 Per-Namespace SMART Log: No 00:21:40.653 Asymmetric Namespace Access Log Page: Not Supported 00:21:40.653 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:40.653 Command Effects Log Page: Supported 00:21:40.653 Get Log Page Extended Data: Supported 00:21:40.653 Telemetry Log Pages: Not Supported 00:21:40.653 Persistent Event Log Pages: Not Supported 00:21:40.653 Supported Log Pages Log Page: May Support 00:21:40.653 Commands 
Supported & Effects Log Page: Not Supported 00:21:40.653 Feature Identifiers & Effects Log Page:May Support 00:21:40.653 NVMe-MI Commands & Effects Log Page: May Support 00:21:40.653 Data Area 4 for Telemetry Log: Not Supported 00:21:40.653 Error Log Page Entries Supported: 128 00:21:40.653 Keep Alive: Supported 00:21:40.653 Keep Alive Granularity: 10000 ms 00:21:40.653 00:21:40.653 NVM Command Set Attributes 00:21:40.653 ========================== 00:21:40.653 Submission Queue Entry Size 00:21:40.653 Max: 64 00:21:40.653 Min: 64 00:21:40.653 Completion Queue Entry Size 00:21:40.653 Max: 16 00:21:40.653 Min: 16 00:21:40.653 Number of Namespaces: 32 00:21:40.653 Compare Command: Supported 00:21:40.653 Write Uncorrectable Command: Not Supported 00:21:40.653 Dataset Management Command: Supported 00:21:40.653 Write Zeroes Command: Supported 00:21:40.653 Set Features Save Field: Not Supported 00:21:40.653 Reservations: Supported 00:21:40.653 Timestamp: Not Supported 00:21:40.653 Copy: Supported 00:21:40.653 Volatile Write Cache: Present 00:21:40.653 Atomic Write Unit (Normal): 1 00:21:40.653 Atomic Write Unit (PFail): 1 00:21:40.653 Atomic Compare & Write Unit: 1 00:21:40.653 Fused Compare & Write: Supported 00:21:40.653 Scatter-Gather List 00:21:40.653 SGL Command Set: Supported 00:21:40.653 SGL Keyed: Supported 00:21:40.653 SGL Bit Bucket Descriptor: Not Supported 00:21:40.653 SGL Metadata Pointer: Not Supported 00:21:40.653 Oversized SGL: Not Supported 00:21:40.653 SGL Metadata Address: Not Supported 00:21:40.653 SGL Offset: Supported 00:21:40.653 Transport SGL Data Block: Not Supported 00:21:40.653 Replay Protected Memory Block: Not Supported 00:21:40.653 00:21:40.653 Firmware Slot Information 00:21:40.653 ========================= 00:21:40.653 Active slot: 1 00:21:40.653 Slot 1 Firmware Revision: 24.05 00:21:40.653 00:21:40.653 00:21:40.653 Commands Supported and Effects 00:21:40.653 ============================== 00:21:40.653 Admin Commands 00:21:40.653 -------------- 00:21:40.653 Get Log Page (02h): Supported 00:21:40.653 Identify (06h): Supported 00:21:40.653 Abort (08h): Supported 00:21:40.653 Set Features (09h): Supported 00:21:40.653 Get Features (0Ah): Supported 00:21:40.653 Asynchronous Event Request (0Ch): Supported 00:21:40.653 Keep Alive (18h): Supported 00:21:40.653 I/O Commands 00:21:40.653 ------------ 00:21:40.653 Flush (00h): Supported LBA-Change 00:21:40.653 Write (01h): Supported LBA-Change 00:21:40.653 Read (02h): Supported 00:21:40.653 Compare (05h): Supported 00:21:40.653 Write Zeroes (08h): Supported LBA-Change 00:21:40.653 Dataset Management (09h): Supported LBA-Change 00:21:40.653 Copy (19h): Supported LBA-Change 00:21:40.653 Unknown (79h): Supported LBA-Change 00:21:40.653 Unknown (7Ah): Supported 00:21:40.653 00:21:40.653 Error Log 00:21:40.653 ========= 00:21:40.653 00:21:40.653 Arbitration 00:21:40.653 =========== 00:21:40.653 Arbitration Burst: 1 00:21:40.653 00:21:40.653 Power Management 00:21:40.653 ================ 00:21:40.653 Number of Power States: 1 00:21:40.653 Current Power State: Power State #0 00:21:40.653 Power State #0: 00:21:40.653 Max Power: 0.00 W 00:21:40.654 Non-Operational State: Operational 00:21:40.654 Entry Latency: Not Reported 00:21:40.654 Exit Latency: Not Reported 00:21:40.654 Relative Read Throughput: 0 00:21:40.654 Relative Read Latency: 0 00:21:40.654 Relative Write Throughput: 0 00:21:40.654 Relative Write Latency: 0 00:21:40.654 Idle Power: Not Reported 00:21:40.654 Active Power: Not Reported 00:21:40.654 Non-Operational 
Permissive Mode: Not Supported 00:21:40.654 00:21:40.654 Health Information 00:21:40.654 ================== 00:21:40.654 Critical Warnings: 00:21:40.654 Available Spare Space: OK 00:21:40.654 Temperature: OK 00:21:40.654 Device Reliability: OK 00:21:40.654 Read Only: No 00:21:40.654 Volatile Memory Backup: OK 00:21:40.654 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:40.654 Temperature Threshold: [2024-04-18 21:14:56.512289] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.512293] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16dbcb0) 00:21:40.654 [2024-04-18 21:14:56.512300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.654 [2024-04-18 21:14:56.512312] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17443a0, cid 7, qid 0 00:21:40.654 [2024-04-18 21:14:56.512432] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.654 [2024-04-18 21:14:56.512440] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.654 [2024-04-18 21:14:56.512443] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.512446] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17443a0) on tqpair=0x16dbcb0 00:21:40.654 [2024-04-18 21:14:56.512476] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:40.654 [2024-04-18 21:14:56.512487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.654 [2024-04-18 21:14:56.512492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.654 [2024-04-18 21:14:56.512497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.654 [2024-04-18 21:14:56.512502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.654 [2024-04-18 21:14:56.516515] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.516520] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.516523] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.654 [2024-04-18 21:14:56.516530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.654 [2024-04-18 21:14:56.516544] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.654 [2024-04-18 21:14:56.516724] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.654 [2024-04-18 21:14:56.516731] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.654 [2024-04-18 21:14:56.516734] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.516737] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.654 [2024-04-18 21:14:56.516744] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.516748] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.516751] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.654 [2024-04-18 21:14:56.516757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.654 [2024-04-18 21:14:56.516775] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.654 [2024-04-18 21:14:56.516902] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.654 [2024-04-18 21:14:56.516909] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.654 [2024-04-18 21:14:56.516912] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.516915] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.654 [2024-04-18 21:14:56.516920] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:40.654 [2024-04-18 21:14:56.516924] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:40.654 [2024-04-18 21:14:56.516933] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.516937] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.516940] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.654 [2024-04-18 21:14:56.516946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.654 [2024-04-18 21:14:56.516957] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.654 [2024-04-18 21:14:56.517071] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.654 [2024-04-18 21:14:56.517079] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.654 [2024-04-18 21:14:56.517082] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517085] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.654 [2024-04-18 21:14:56.517096] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517100] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517103] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.654 [2024-04-18 21:14:56.517109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.654 [2024-04-18 21:14:56.517120] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.654 [2024-04-18 21:14:56.517222] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.654 [2024-04-18 21:14:56.517229] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.654 [2024-04-18 21:14:56.517232] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517235] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.654 [2024-04-18 21:14:56.517245] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517249] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517252] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.654 [2024-04-18 21:14:56.517258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.654 [2024-04-18 21:14:56.517268] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.654 [2024-04-18 21:14:56.517374] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.654 [2024-04-18 21:14:56.517380] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.654 [2024-04-18 21:14:56.517382] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517386] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.654 [2024-04-18 21:14:56.517396] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517399] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517403] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.654 [2024-04-18 21:14:56.517411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.654 [2024-04-18 21:14:56.517422] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.654 [2024-04-18 21:14:56.517540] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.654 [2024-04-18 21:14:56.517547] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.654 [2024-04-18 21:14:56.517550] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517553] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.654 [2024-04-18 21:14:56.517563] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517567] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.654 [2024-04-18 21:14:56.517570] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.654 [2024-04-18 21:14:56.517576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.654 [2024-04-18 21:14:56.517587] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.654 [2024-04-18 21:14:56.517693] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.654 [2024-04-18 21:14:56.517699] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.654 [2024-04-18 21:14:56.517702] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.517705] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.517716] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.517719] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 
21:14:56.517723] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.517728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.517738] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.517843] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.517849] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.517852] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.517855] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.517866] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.517869] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.517873] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.517878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.517889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.517991] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.517998] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.518000] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518004] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.518013] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518017] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.518026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.518039] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.518142] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.518148] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.518151] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518154] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.518165] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518168] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518171] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.518177] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.518188] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.518405] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.518411] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.518414] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518417] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.518426] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518430] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518433] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.518439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.518448] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.518562] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.518569] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.518572] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518575] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.518586] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518590] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518593] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.518599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.518609] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.518717] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.518723] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.518726] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518729] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.518739] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518743] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518746] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.518752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.518765] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.518968] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.518973] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.518976] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518980] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.518989] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518992] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.518995] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.519001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.519011] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.519116] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.519122] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.519125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519128] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.519138] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519145] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.519151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.519161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.519266] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.519273] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.519275] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519279] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.519289] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519292] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519295] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.519301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.519311] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.519418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
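(Annotation: the repeating *DEBUG* records above, and continuing below, are the admin queue pair re-issuing FABRIC PROPERTY GET capsules while nvme_ctrlr_shutdown_poll_async waits for the controller behind nqn.2016-06.io.spdk:cnode1 to report shutdown complete; the cycle ends a few records further on with "shutdown complete in 7 milliseconds". A rough sketch, assuming a saved copy of this console output in a file named console.log (hypothetical name, not produced by this run), of measuring that poll loop from the log:
  # count PROPERTY GET capsules issued on the admin queue (qid:0); grep -o is used
  # because several records share one physical line in this log
  grep -o 'FABRIC PROPERTY GET qid:0' console.log | wc -l
  # show the record that reports the measured shutdown duration
  grep -o 'shutdown complete in [0-9]* milliseconds' console.log
)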
00:21:40.655 [2024-04-18 21:14:56.519425] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.519428] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519431] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.519440] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519444] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519447] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.519453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.519463] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.519672] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.519678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.655 [2024-04-18 21:14:56.519682] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519685] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.655 [2024-04-18 21:14:56.519694] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519697] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.655 [2024-04-18 21:14:56.519700] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.655 [2024-04-18 21:14:56.519706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.655 [2024-04-18 21:14:56.519716] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.655 [2024-04-18 21:14:56.519828] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.655 [2024-04-18 21:14:56.519834] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.656 [2024-04-18 21:14:56.519837] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.519840] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.656 [2024-04-18 21:14:56.519850] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.519854] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.519857] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.656 [2024-04-18 21:14:56.519863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.656 [2024-04-18 21:14:56.519873] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.656 [2024-04-18 21:14:56.519975] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.656 [2024-04-18 21:14:56.519981] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.656 [2024-04-18 21:14:56.519984] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.519987] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.656 [2024-04-18 21:14:56.519997] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.520001] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.520004] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.656 [2024-04-18 21:14:56.520010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.656 [2024-04-18 21:14:56.520020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.656 [2024-04-18 21:14:56.520127] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.656 [2024-04-18 21:14:56.520133] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.656 [2024-04-18 21:14:56.520136] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.520139] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.656 [2024-04-18 21:14:56.520149] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.520153] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.520156] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.656 [2024-04-18 21:14:56.520161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.656 [2024-04-18 21:14:56.520172] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.656 [2024-04-18 21:14:56.520277] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.656 [2024-04-18 21:14:56.520286] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.656 [2024-04-18 21:14:56.520289] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.520292] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.656 [2024-04-18 21:14:56.520303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.520306] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.520310] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.656 [2024-04-18 21:14:56.520315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.656 [2024-04-18 21:14:56.520326] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.656 [2024-04-18 21:14:56.524517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.656 [2024-04-18 21:14:56.524524] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.656 [2024-04-18 21:14:56.524527] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.524531] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on 
tqpair=0x16dbcb0 00:21:40.656 [2024-04-18 21:14:56.524540] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.524544] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.524547] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dbcb0) 00:21:40.656 [2024-04-18 21:14:56.524553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.656 [2024-04-18 21:14:56.524565] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1743e20, cid 3, qid 0 00:21:40.656 [2024-04-18 21:14:56.524759] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:40.656 [2024-04-18 21:14:56.524766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:40.656 [2024-04-18 21:14:56.524769] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:40.656 [2024-04-18 21:14:56.524772] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1743e20) on tqpair=0x16dbcb0 00:21:40.656 [2024-04-18 21:14:56.524781] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:21:40.656 0 Kelvin (-273 Celsius) 00:21:40.656 Available Spare: 0% 00:21:40.656 Available Spare Threshold: 0% 00:21:40.656 Life Percentage Used: 0% 00:21:40.656 Data Units Read: 0 00:21:40.656 Data Units Written: 0 00:21:40.656 Host Read Commands: 0 00:21:40.656 Host Write Commands: 0 00:21:40.656 Controller Busy Time: 0 minutes 00:21:40.656 Power Cycles: 0 00:21:40.656 Power On Hours: 0 hours 00:21:40.656 Unsafe Shutdowns: 0 00:21:40.656 Unrecoverable Media Errors: 0 00:21:40.656 Lifetime Error Log Entries: 0 00:21:40.656 Warning Temperature Time: 0 minutes 00:21:40.656 Critical Temperature Time: 0 minutes 00:21:40.656 00:21:40.656 Number of Queues 00:21:40.656 ================ 00:21:40.656 Number of I/O Submission Queues: 127 00:21:40.656 Number of I/O Completion Queues: 127 00:21:40.656 00:21:40.656 Active Namespaces 00:21:40.656 ================= 00:21:40.656 Namespace ID:1 00:21:40.656 Error Recovery Timeout: Unlimited 00:21:40.656 Command Set Identifier: NVM (00h) 00:21:40.656 Deallocate: Supported 00:21:40.656 Deallocated/Unwritten Error: Not Supported 00:21:40.656 Deallocated Read Value: Unknown 00:21:40.656 Deallocate in Write Zeroes: Not Supported 00:21:40.656 Deallocated Guard Field: 0xFFFF 00:21:40.656 Flush: Supported 00:21:40.656 Reservation: Supported 00:21:40.656 Namespace Sharing Capabilities: Multiple Controllers 00:21:40.656 Size (in LBAs): 131072 (0GiB) 00:21:40.656 Capacity (in LBAs): 131072 (0GiB) 00:21:40.656 Utilization (in LBAs): 131072 (0GiB) 00:21:40.656 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:40.656 EUI64: ABCDEF0123456789 00:21:40.656 UUID: fe0d24f8-f3cd-46a2-8dd6-9c7c9e45b9ee 00:21:40.656 Thin Provisioning: Not Supported 00:21:40.656 Per-NS Atomic Units: Yes 00:21:40.656 Atomic Boundary Size (Normal): 0 00:21:40.656 Atomic Boundary Size (PFail): 0 00:21:40.656 Atomic Boundary Offset: 0 00:21:40.656 Maximum Single Source Range Length: 65535 00:21:40.656 Maximum Copy Length: 65535 00:21:40.656 Maximum Source Range Count: 1 00:21:40.656 NGUID/EUI64 Never Reused: No 00:21:40.656 Namespace Write Protected: No 00:21:40.656 Number of LBA Formats: 1 00:21:40.656 Current LBA Format: LBA Format #00 00:21:40.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:40.656 00:21:40.656 
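(Annotation: the attribute dump above is the Identify Controller/Namespace data that host/identify.sh read back from subsystem nqn.2016-06.io.spdk:cnode1 over NVMe/TCP: a 131072-LBA namespace with one 512-byte LBA format, NGUID ABCDEF0123456789ABCDEF0123456789, EUI64 ABCDEF0123456789. A minimal sketch of checking the same fields by hand with the standard nvme-cli tool, assuming the kernel nvme-tcp initiator is available and the 10.0.0.2:4420 listener configured by these tests is still up; the /dev/nvme0n1 name is illustrative and depends on enumeration:
  modprobe nvme-tcp                             # kernel TCP initiator
  nvme discover -t tcp -a 10.0.0.2 -s 4420      # should list nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ns /dev/nvme0n1 -H                    # NGUID/EUI64/UUID should match the dump above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
)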
21:14:56 -- host/identify.sh@51 -- # sync 00:21:40.656 21:14:56 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.656 21:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.656 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.656 21:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.656 21:14:56 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:40.656 21:14:56 -- host/identify.sh@56 -- # nvmftestfini 00:21:40.656 21:14:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:40.656 21:14:56 -- nvmf/common.sh@117 -- # sync 00:21:40.656 21:14:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.656 21:14:56 -- nvmf/common.sh@120 -- # set +e 00:21:40.656 21:14:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.656 21:14:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.656 rmmod nvme_tcp 00:21:40.656 rmmod nvme_fabrics 00:21:40.915 rmmod nvme_keyring 00:21:40.915 21:14:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.915 21:14:56 -- nvmf/common.sh@124 -- # set -e 00:21:40.915 21:14:56 -- nvmf/common.sh@125 -- # return 0 00:21:40.915 21:14:56 -- nvmf/common.sh@478 -- # '[' -n 3129118 ']' 00:21:40.915 21:14:56 -- nvmf/common.sh@479 -- # killprocess 3129118 00:21:40.915 21:14:56 -- common/autotest_common.sh@936 -- # '[' -z 3129118 ']' 00:21:40.915 21:14:56 -- common/autotest_common.sh@940 -- # kill -0 3129118 00:21:40.915 21:14:56 -- common/autotest_common.sh@941 -- # uname 00:21:40.915 21:14:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.915 21:14:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3129118 00:21:40.915 21:14:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:40.915 21:14:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:40.915 21:14:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3129118' 00:21:40.915 killing process with pid 3129118 00:21:40.915 21:14:56 -- common/autotest_common.sh@955 -- # kill 3129118 00:21:40.915 [2024-04-18 21:14:56.681111] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:40.915 21:14:56 -- common/autotest_common.sh@960 -- # wait 3129118 00:21:41.174 21:14:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:41.174 21:14:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:41.174 21:14:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:41.174 21:14:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.174 21:14:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.174 21:14:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.174 21:14:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.174 21:14:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.076 21:14:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:43.076 00:21:43.076 real 0m10.500s 00:21:43.076 user 0m8.397s 00:21:43.076 sys 0m5.234s 00:21:43.076 21:14:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.076 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:43.076 ************************************ 00:21:43.076 END TEST nvmf_identify 00:21:43.076 ************************************ 00:21:43.334 21:14:59 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:43.334 21:14:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:43.334 21:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.334 21:14:59 -- common/autotest_common.sh@10 -- # set +x 00:21:43.334 ************************************ 00:21:43.334 START TEST nvmf_perf 00:21:43.334 ************************************ 00:21:43.334 21:14:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:43.334 * Looking for test storage... 00:21:43.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.334 21:14:59 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.334 21:14:59 -- nvmf/common.sh@7 -- # uname -s 00:21:43.334 21:14:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.334 21:14:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.334 21:14:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.334 21:14:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.334 21:14:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.334 21:14:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.334 21:14:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.334 21:14:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.334 21:14:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.334 21:14:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.334 21:14:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:43.334 21:14:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:43.334 21:14:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.334 21:14:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.334 21:14:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.334 21:14:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.334 21:14:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.334 21:14:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.334 21:14:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.334 21:14:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.334 21:14:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.334 21:14:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.334 21:14:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.334 21:14:59 -- paths/export.sh@5 -- # export PATH 00:21:43.334 21:14:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.334 21:14:59 -- nvmf/common.sh@47 -- # : 0 00:21:43.334 21:14:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.334 21:14:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.334 21:14:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.334 21:14:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.334 21:14:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.334 21:14:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.334 21:14:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.334 21:14:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.334 21:14:59 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:43.334 21:14:59 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:43.334 21:14:59 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:43.334 21:14:59 -- host/perf.sh@17 -- # nvmftestinit 00:21:43.334 21:14:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:43.334 21:14:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.334 21:14:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:43.334 21:14:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:43.334 21:14:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:43.334 21:14:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.334 21:14:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.334 21:14:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.334 21:14:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:43.334 21:14:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:43.334 21:14:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.334 21:14:59 -- 
common/autotest_common.sh@10 -- # set +x 00:21:48.601 21:15:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:48.601 21:15:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:48.601 21:15:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:48.601 21:15:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:48.601 21:15:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:48.601 21:15:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:48.601 21:15:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:48.601 21:15:04 -- nvmf/common.sh@295 -- # net_devs=() 00:21:48.601 21:15:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:48.601 21:15:04 -- nvmf/common.sh@296 -- # e810=() 00:21:48.601 21:15:04 -- nvmf/common.sh@296 -- # local -ga e810 00:21:48.601 21:15:04 -- nvmf/common.sh@297 -- # x722=() 00:21:48.601 21:15:04 -- nvmf/common.sh@297 -- # local -ga x722 00:21:48.601 21:15:04 -- nvmf/common.sh@298 -- # mlx=() 00:21:48.601 21:15:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:48.601 21:15:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.601 21:15:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:48.601 21:15:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:48.601 21:15:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:48.601 21:15:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.601 21:15:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:48.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:48.601 21:15:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.601 21:15:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:48.601 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:48.601 21:15:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
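(Annotation: the gather_supported_nvmf_pci_devs records above, continuing just below, show common.sh matching the two ice/E810 functions at 0000:86:00.0 and 0000:86:00.1 (vendor:device 0x8086:0x159b) and then picking up their cvl_0_0/cvl_0_1 net devices for the TCP test network. A hedged sketch of reproducing that scan by hand on the same host; the device IDs come from the e810 arrays echoed above, and the sysfs path is the standard kernel layout rather than anything specific to this harness:
  # list E810 functions the way the harness filters them (both IDs in the e810 array)
  lspci -d 8086:159b
  lspci -d 8086:1592
  # show the kernel net device bound to one of the matched PCI functions
  ls /sys/bus/pci/devices/0000:86:00.0/net
)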
00:21:48.601 21:15:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:48.601 21:15:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.601 21:15:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.601 21:15:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:48.601 21:15:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.601 21:15:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:48.601 Found net devices under 0000:86:00.0: cvl_0_0 00:21:48.601 21:15:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.601 21:15:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.601 21:15:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.601 21:15:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:48.601 21:15:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.601 21:15:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:48.601 Found net devices under 0000:86:00.1: cvl_0_1 00:21:48.601 21:15:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.601 21:15:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:48.601 21:15:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:48.601 21:15:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:48.601 21:15:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:48.601 21:15:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.601 21:15:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.601 21:15:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.601 21:15:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:48.601 21:15:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.601 21:15:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.601 21:15:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:48.601 21:15:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.601 21:15:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.601 21:15:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:48.601 21:15:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:48.602 21:15:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.602 21:15:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.860 21:15:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.860 21:15:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.860 21:15:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:48.860 21:15:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.860 21:15:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.860 21:15:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.860 21:15:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:48.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:21:48.860 00:21:48.860 --- 10.0.0.2 ping statistics --- 00:21:48.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.860 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:21:48.860 21:15:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:21:48.860 00:21:48.860 --- 10.0.0.1 ping statistics --- 00:21:48.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.860 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:21:48.860 21:15:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.860 21:15:04 -- nvmf/common.sh@411 -- # return 0 00:21:48.860 21:15:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:48.860 21:15:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.860 21:15:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:48.860 21:15:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:48.860 21:15:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.860 21:15:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:48.860 21:15:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:48.860 21:15:04 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:48.860 21:15:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:48.860 21:15:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:48.860 21:15:04 -- common/autotest_common.sh@10 -- # set +x 00:21:49.119 21:15:04 -- nvmf/common.sh@470 -- # nvmfpid=3133300 00:21:49.119 21:15:04 -- nvmf/common.sh@471 -- # waitforlisten 3133300 00:21:49.119 21:15:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.119 21:15:04 -- common/autotest_common.sh@817 -- # '[' -z 3133300 ']' 00:21:49.119 21:15:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.119 21:15:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:49.119 21:15:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.119 21:15:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:49.119 21:15:04 -- common/autotest_common.sh@10 -- # set +x 00:21:49.119 [2024-04-18 21:15:04.839301] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:21:49.119 [2024-04-18 21:15:04.839346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.119 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.119 [2024-04-18 21:15:04.903750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.119 [2024-04-18 21:15:04.982318] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.119 [2024-04-18 21:15:04.982353] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:49.119 [2024-04-18 21:15:04.982359] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.119 [2024-04-18 21:15:04.982365] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.119 [2024-04-18 21:15:04.982371] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.119 [2024-04-18 21:15:04.982402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.119 [2024-04-18 21:15:04.982498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.119 [2024-04-18 21:15:04.982588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.119 [2024-04-18 21:15:04.982590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.053 21:15:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:50.053 21:15:05 -- common/autotest_common.sh@850 -- # return 0 00:21:50.053 21:15:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:50.053 21:15:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:50.053 21:15:05 -- common/autotest_common.sh@10 -- # set +x 00:21:50.053 21:15:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.053 21:15:05 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:50.053 21:15:05 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:53.370 21:15:08 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:53.370 21:15:08 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:53.370 21:15:08 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:53.370 21:15:08 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:53.370 21:15:09 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:53.370 21:15:09 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:21:53.370 21:15:09 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:53.370 21:15:09 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:53.370 21:15:09 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:53.370 [2024-04-18 21:15:09.275447] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.628 21:15:09 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.628 21:15:09 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:53.628 21:15:09 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:53.886 21:15:09 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:53.886 21:15:09 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:54.145 21:15:09 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.145 [2024-04-18 21:15:10.046325] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.145 21:15:10 -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:54.403 21:15:10 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:54.403 21:15:10 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:54.403 21:15:10 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:54.403 21:15:10 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:55.775 Initializing NVMe Controllers 00:21:55.775 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:21:55.775 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:21:55.775 Initialization complete. Launching workers. 00:21:55.775 ======================================================== 00:21:55.775 Latency(us) 00:21:55.775 Device Information : IOPS MiB/s Average min max 00:21:55.775 PCIE (0000:5e:00.0) NSID 1 from core 0: 98942.18 386.49 322.99 35.26 4423.81 00:21:55.775 ======================================================== 00:21:55.775 Total : 98942.18 386.49 322.99 35.26 4423.81 00:21:55.775 00:21:55.776 21:15:11 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:55.776 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.149 Initializing NVMe Controllers 00:21:57.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:57.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:57.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:57.149 Initialization complete. Launching workers. 00:21:57.149 ======================================================== 00:21:57.149 Latency(us) 00:21:57.149 Device Information : IOPS MiB/s Average min max 00:21:57.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 106.00 0.41 9489.02 324.59 46189.01 00:21:57.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 49.00 0.19 20988.71 6984.55 47899.88 00:21:57.149 ======================================================== 00:21:57.149 Total : 155.00 0.61 13124.40 324.59 47899.88 00:21:57.149 00:21:57.149 21:15:12 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:57.149 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.524 Initializing NVMe Controllers 00:21:58.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:58.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:58.524 Initialization complete. Launching workers. 
00:21:58.524 ======================================================== 00:21:58.524 Latency(us) 00:21:58.524 Device Information : IOPS MiB/s Average min max 00:21:58.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8200.00 32.03 3913.10 707.87 8197.63 00:21:58.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3887.00 15.18 8269.25 5311.67 15981.80 00:21:58.524 ======================================================== 00:21:58.524 Total : 12087.00 47.21 5313.97 707.87 15981.80 00:21:58.524 00:21:58.524 21:15:14 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:58.524 21:15:14 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:58.524 21:15:14 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:58.524 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.056 Initializing NVMe Controllers 00:22:01.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.056 Controller IO queue size 128, less than required. 00:22:01.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:01.056 Controller IO queue size 128, less than required. 00:22:01.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:01.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:01.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:01.056 Initialization complete. Launching workers. 00:22:01.056 ======================================================== 00:22:01.056 Latency(us) 00:22:01.056 Device Information : IOPS MiB/s Average min max 00:22:01.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 908.38 227.09 150311.00 85647.55 271599.57 00:22:01.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 621.92 155.48 210887.74 78070.41 343205.79 00:22:01.056 ======================================================== 00:22:01.056 Total : 1530.29 382.57 174929.58 78070.41 343205.79 00:22:01.056 00:22:01.056 21:15:16 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:01.056 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.056 No valid NVMe controllers or AIO or URING devices found 00:22:01.056 Initializing NVMe Controllers 00:22:01.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.056 Controller IO queue size 128, less than required. 00:22:01.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:01.056 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:01.056 Controller IO queue size 128, less than required. 00:22:01.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:01.056 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:01.056 WARNING: Some requested NVMe devices were skipped 00:22:01.056 21:15:16 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:01.056 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.603 Initializing NVMe Controllers 00:22:03.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.603 Controller IO queue size 128, less than required. 00:22:03.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:03.603 Controller IO queue size 128, less than required. 00:22:03.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:03.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:03.603 Initialization complete. Launching workers. 00:22:03.603 00:22:03.603 ==================== 00:22:03.603 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:03.603 TCP transport: 00:22:03.603 polls: 42728 00:22:03.603 idle_polls: 14335 00:22:03.603 sock_completions: 28393 00:22:03.603 nvme_completions: 3831 00:22:03.603 submitted_requests: 5812 00:22:03.603 queued_requests: 1 00:22:03.603 00:22:03.603 ==================== 00:22:03.603 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:03.603 TCP transport: 00:22:03.603 polls: 43084 00:22:03.603 idle_polls: 13182 00:22:03.603 sock_completions: 29902 00:22:03.603 nvme_completions: 3993 00:22:03.603 submitted_requests: 6010 00:22:03.603 queued_requests: 1 00:22:03.603 ======================================================== 00:22:03.603 Latency(us) 00:22:03.603 Device Information : IOPS MiB/s Average min max 00:22:03.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 956.70 239.17 137564.14 68720.87 233915.78 00:22:03.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 997.17 249.29 131775.62 61015.27 188461.82 00:22:03.603 ======================================================== 00:22:03.603 Total : 1953.87 488.47 134609.94 61015.27 233915.78 00:22:03.603 00:22:03.603 21:15:19 -- host/perf.sh@66 -- # sync 00:22:03.603 21:15:19 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:03.603 21:15:19 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:03.603 21:15:19 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:03.603 21:15:19 -- host/perf.sh@114 -- # nvmftestfini 00:22:03.603 21:15:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:03.603 21:15:19 -- nvmf/common.sh@117 -- # sync 00:22:03.603 21:15:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.603 21:15:19 -- nvmf/common.sh@120 -- # set +e 00:22:03.603 21:15:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.603 21:15:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.603 rmmod nvme_tcp 00:22:03.603 rmmod nvme_fabrics 00:22:03.861 rmmod nvme_keyring 00:22:03.861 21:15:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.861 21:15:19 -- nvmf/common.sh@124 -- # set -e 00:22:03.861 21:15:19 -- nvmf/common.sh@125 -- # return 0 00:22:03.861 21:15:19 -- 
nvmf/common.sh@478 -- # '[' -n 3133300 ']' 00:22:03.861 21:15:19 -- nvmf/common.sh@479 -- # killprocess 3133300 00:22:03.861 21:15:19 -- common/autotest_common.sh@936 -- # '[' -z 3133300 ']' 00:22:03.861 21:15:19 -- common/autotest_common.sh@940 -- # kill -0 3133300 00:22:03.861 21:15:19 -- common/autotest_common.sh@941 -- # uname 00:22:03.861 21:15:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:03.861 21:15:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3133300 00:22:03.861 21:15:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:03.861 21:15:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:03.861 21:15:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3133300' 00:22:03.861 killing process with pid 3133300 00:22:03.861 21:15:19 -- common/autotest_common.sh@955 -- # kill 3133300 00:22:03.861 21:15:19 -- common/autotest_common.sh@960 -- # wait 3133300 00:22:05.764 21:15:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:05.764 21:15:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:05.764 21:15:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:05.764 21:15:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.764 21:15:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:05.764 21:15:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.764 21:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.764 21:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.669 21:15:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:07.669 00:22:07.669 real 0m24.088s 00:22:07.669 user 1m4.833s 00:22:07.669 sys 0m7.105s 00:22:07.669 21:15:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:07.669 21:15:23 -- common/autotest_common.sh@10 -- # set +x 00:22:07.669 ************************************ 00:22:07.669 END TEST nvmf_perf 00:22:07.669 ************************************ 00:22:07.669 21:15:23 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:07.669 21:15:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:07.669 21:15:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:07.669 21:15:23 -- common/autotest_common.sh@10 -- # set +x 00:22:07.669 ************************************ 00:22:07.669 START TEST nvmf_fio_host 00:22:07.669 ************************************ 00:22:07.669 21:15:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:07.669 * Looking for test storage... 
00:22:07.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.670 21:15:23 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.670 21:15:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.670 21:15:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.670 21:15:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.670 21:15:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.670 21:15:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.670 21:15:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.670 21:15:23 -- paths/export.sh@5 -- # export PATH 00:22:07.670 21:15:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.670 21:15:23 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.670 21:15:23 -- nvmf/common.sh@7 -- # uname -s 00:22:07.670 21:15:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.670 21:15:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.670 21:15:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.670 21:15:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.670 21:15:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.670 21:15:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.670 21:15:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.670 21:15:23 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.670 21:15:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.670 21:15:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.670 21:15:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.670 21:15:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.670 21:15:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.670 21:15:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.670 21:15:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.670 21:15:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.670 21:15:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.670 21:15:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.670 21:15:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.670 21:15:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.670 21:15:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.670 21:15:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.670 21:15:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.670 21:15:23 -- paths/export.sh@5 -- # export PATH 00:22:07.670 21:15:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.670 21:15:23 -- nvmf/common.sh@47 -- # : 0 00:22:07.670 21:15:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.670 21:15:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.670 21:15:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.670 21:15:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.670 21:15:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.670 21:15:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.670 21:15:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.670 21:15:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.670 21:15:23 -- host/fio.sh@12 -- # nvmftestinit 00:22:07.670 21:15:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:07.670 21:15:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.670 21:15:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:07.670 21:15:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:07.670 21:15:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:07.670 21:15:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.670 21:15:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.670 21:15:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.670 21:15:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:07.670 21:15:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:07.670 21:15:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.670 21:15:23 -- common/autotest_common.sh@10 -- # set +x 00:22:14.269 21:15:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:14.269 21:15:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.269 21:15:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.269 21:15:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.269 21:15:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.269 21:15:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.269 21:15:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.269 21:15:28 -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.269 21:15:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.269 21:15:28 -- nvmf/common.sh@296 -- # e810=() 00:22:14.269 21:15:28 -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.269 21:15:28 -- nvmf/common.sh@297 -- # x722=() 00:22:14.269 21:15:28 -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.269 21:15:28 -- nvmf/common.sh@298 -- # mlx=() 00:22:14.269 21:15:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.269 21:15:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.269 21:15:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.269 21:15:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.269 21:15:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.269 21:15:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.269 21:15:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:14.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:14.269 21:15:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.269 21:15:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:14.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:14.269 21:15:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.269 21:15:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.269 21:15:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.269 21:15:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.269 21:15:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:14.269 21:15:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.269 21:15:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:14.269 Found net devices under 0000:86:00.0: cvl_0_0 00:22:14.269 21:15:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.269 21:15:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.269 21:15:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.269 21:15:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:14.269 21:15:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.269 21:15:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:14.269 Found net devices under 0000:86:00.1: cvl_0_1 00:22:14.269 21:15:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.270 21:15:28 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:14.270 21:15:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:14.270 21:15:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:14.270 21:15:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:14.270 21:15:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:14.270 21:15:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.270 21:15:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.270 21:15:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.270 21:15:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.270 21:15:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.270 21:15:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.270 21:15:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.270 21:15:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.270 21:15:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.270 21:15:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.270 21:15:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.270 21:15:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.270 21:15:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.270 21:15:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.270 21:15:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.270 21:15:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.270 21:15:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.270 21:15:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.270 21:15:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.270 21:15:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:22:14.270 00:22:14.270 --- 10.0.0.2 ping statistics --- 00:22:14.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.270 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:22:14.270 21:15:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:14.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:22:14.270 00:22:14.270 --- 10.0.0.1 ping statistics --- 00:22:14.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.270 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:22:14.270 21:15:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.270 21:15:29 -- nvmf/common.sh@411 -- # return 0 00:22:14.270 21:15:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:14.270 21:15:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.270 21:15:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:14.270 21:15:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:14.270 21:15:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.270 21:15:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:14.270 21:15:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:14.270 21:15:29 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:14.270 21:15:29 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:14.270 21:15:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:14.270 21:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:14.270 21:15:29 -- host/fio.sh@22 -- # nvmfpid=3140094 00:22:14.270 21:15:29 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:14.270 21:15:29 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.270 21:15:29 -- host/fio.sh@26 -- # waitforlisten 3140094 00:22:14.270 21:15:29 -- common/autotest_common.sh@817 -- # '[' -z 3140094 ']' 00:22:14.270 21:15:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.270 21:15:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:14.270 21:15:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.270 21:15:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:14.270 21:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:14.270 [2024-04-18 21:15:29.352497] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:22:14.270 [2024-04-18 21:15:29.352541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.270 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.270 [2024-04-18 21:15:29.415401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.270 [2024-04-18 21:15:29.493600] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.270 [2024-04-18 21:15:29.493634] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.270 [2024-04-18 21:15:29.493641] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.270 [2024-04-18 21:15:29.493648] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.270 [2024-04-18 21:15:29.493653] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.270 [2024-04-18 21:15:29.493696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.270 [2024-04-18 21:15:29.493791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.270 [2024-04-18 21:15:29.493877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.270 [2024-04-18 21:15:29.493878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.270 21:15:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:14.270 21:15:30 -- common/autotest_common.sh@850 -- # return 0 00:22:14.270 21:15:30 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.270 21:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.270 21:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:14.270 [2024-04-18 21:15:30.167277] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.270 21:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.270 21:15:30 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:14.270 21:15:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:14.270 21:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:14.530 21:15:30 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:14.530 21:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.530 21:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:14.530 Malloc1 00:22:14.530 21:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.530 21:15:30 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:14.530 21:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.530 21:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:14.530 21:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.530 21:15:30 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:14.530 21:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.530 21:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:14.530 21:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.530 21:15:30 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.530 21:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.531 21:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:14.531 [2024-04-18 21:15:30.259373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.531 21:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.531 21:15:30 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:14.531 21:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.531 21:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:14.531 21:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.531 21:15:30 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:14.531 21:15:30 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.531 21:15:30 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.531 21:15:30 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:14.531 21:15:30 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:14.531 21:15:30 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:14.531 21:15:30 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:14.531 21:15:30 -- common/autotest_common.sh@1327 -- # shift 00:22:14.531 21:15:30 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:14.531 21:15:30 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.531 21:15:30 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:14.531 21:15:30 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:14.531 21:15:30 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:14.531 21:15:30 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:14.531 21:15:30 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:14.531 21:15:30 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.531 21:15:30 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:14.531 21:15:30 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:14.531 21:15:30 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:14.531 21:15:30 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:14.531 21:15:30 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:14.531 21:15:30 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:14.531 21:15:30 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.790 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:14.790 fio-3.35 00:22:14.790 Starting 1 thread 00:22:14.790 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.327 00:22:17.327 test: (groupid=0, jobs=1): err= 0: pid=3140460: Thu Apr 18 21:15:32 2024 00:22:17.327 read: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(86.8MiB/2006msec) 00:22:17.327 slat (nsec): min=1566, max=241109, avg=1770.15, stdev=2319.75 00:22:17.327 clat (usec): min=3489, max=14183, avg=6495.37, stdev=874.31 00:22:17.327 lat (usec): min=3518, max=14194, avg=6497.14, stdev=874.55 00:22:17.327 clat percentiles (usec): 00:22:17.327 | 1.00th=[ 4948], 5.00th=[ 5473], 10.00th=[ 5735], 20.00th=[ 5997], 00:22:17.327 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:22:17.327 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7177], 95.00th=[ 7635], 00:22:17.327 | 99.00th=[10552], 99.50th=[11076], 99.90th=[13698], 99.95th=[14091], 00:22:17.327 | 99.99th=[14222] 00:22:17.327 bw ( KiB/s): min=43168, max=44976, per=99.98%, avg=44300.00, stdev=846.50, samples=4 00:22:17.327 iops : min=10792, max=11244, avg=11075.00, stdev=211.63, samples=4 00:22:17.327 write: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(86.6MiB/2006msec); 0 zone resets 00:22:17.327 slat (nsec): min=1651, max=225647, avg=1890.45, stdev=1699.15 00:22:17.327 clat 
(usec): min=2403, max=10336, avg=5028.11, stdev=520.55 00:22:17.327 lat (usec): min=2406, max=10338, avg=5030.00, stdev=520.58 00:22:17.327 clat percentiles (usec): 00:22:17.327 | 1.00th=[ 3392], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4686], 00:22:17.327 | 30.00th=[ 4817], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5145], 00:22:17.327 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5735], 00:22:17.327 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 8717], 99.95th=[ 9110], 00:22:17.327 | 99.99th=[10290] 00:22:17.327 bw ( KiB/s): min=43520, max=44776, per=100.00%, avg=44204.00, stdev=521.35, samples=4 00:22:17.327 iops : min=10880, max=11194, avg=11051.00, stdev=130.34, samples=4 00:22:17.327 lat (msec) : 4=1.58%, 10=97.72%, 20=0.70% 00:22:17.327 cpu : usr=60.90%, sys=31.02%, ctx=51, majf=0, minf=4 00:22:17.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:17.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:17.327 issued rwts: total=22221,22158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.327 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:17.327 00:22:17.327 Run status group 0 (all jobs): 00:22:17.327 READ: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=86.8MiB (91.0MB), run=2006-2006msec 00:22:17.327 WRITE: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=86.6MiB (90.8MB), run=2006-2006msec 00:22:17.327 21:15:32 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.327 21:15:32 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.327 21:15:32 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:17.327 21:15:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:17.327 21:15:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:17.327 21:15:32 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.327 21:15:32 -- common/autotest_common.sh@1327 -- # shift 00:22:17.327 21:15:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:17.327 21:15:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.327 21:15:32 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.327 21:15:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:17.327 21:15:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.327 21:15:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:17.327 21:15:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:17.327 21:15:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.327 21:15:32 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.327 21:15:32 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:17.327 21:15:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.327 21:15:32 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:22:17.327 21:15:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:17.327 21:15:32 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:17.327 21:15:32 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.327 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:17.327 fio-3.35 00:22:17.327 Starting 1 thread 00:22:17.586 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.121 00:22:20.121 test: (groupid=0, jobs=1): err= 0: pid=3141033: Thu Apr 18 21:15:35 2024 00:22:20.121 read: IOPS=9678, BW=151MiB/s (159MB/s)(303MiB/2003msec) 00:22:20.121 slat (usec): min=2, max=107, avg= 2.85, stdev= 1.35 00:22:20.121 clat (usec): min=2110, max=28182, avg=8025.48, stdev=2594.16 00:22:20.121 lat (usec): min=2113, max=28185, avg=8028.33, stdev=2594.47 00:22:20.121 clat percentiles (usec): 00:22:20.121 | 1.00th=[ 3752], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 6063], 00:22:20.121 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[ 7635], 60.00th=[ 8160], 00:22:20.121 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[11207], 95.00th=[12911], 00:22:20.121 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19268], 99.95th=[19792], 00:22:20.121 | 99.99th=[27657] 00:22:20.121 bw ( KiB/s): min=72576, max=86496, per=49.58%, avg=76776.00, stdev=6518.81, samples=4 00:22:20.121 iops : min= 4536, max= 5406, avg=4798.50, stdev=407.43, samples=4 00:22:20.121 write: IOPS=5627, BW=87.9MiB/s (92.2MB/s)(157MiB/1785msec); 0 zone resets 00:22:20.121 slat (usec): min=30, max=378, avg=31.97, stdev= 6.92 00:22:20.121 clat (usec): min=2491, max=24802, avg=9188.83, stdev=2295.16 00:22:20.121 lat (usec): min=2523, max=24840, avg=9220.80, stdev=2297.81 00:22:20.121 clat percentiles (usec): 00:22:20.121 | 1.00th=[ 5932], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7635], 00:22:20.121 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:22:20.121 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[12387], 00:22:20.121 | 99.00th=[19268], 99.50th=[20317], 99.90th=[22152], 99.95th=[22414], 00:22:20.121 | 99.99th=[24773] 00:22:20.121 bw ( KiB/s): min=76320, max=88960, per=88.83%, avg=79984.00, stdev=6018.30, samples=4 00:22:20.121 iops : min= 4770, max= 5560, avg=4999.00, stdev=376.14, samples=4 00:22:20.121 lat (msec) : 4=1.09%, 10=80.41%, 20=18.28%, 50=0.21% 00:22:20.121 cpu : usr=83.37%, sys=13.43%, ctx=40, majf=0, minf=1 00:22:20.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:20.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:20.121 issued rwts: total=19386,10045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.121 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:20.121 00:22:20.121 Run status group 0 (all jobs): 00:22:20.121 READ: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=303MiB (318MB), run=2003-2003msec 00:22:20.121 WRITE: bw=87.9MiB/s (92.2MB/s), 87.9MiB/s-87.9MiB/s (92.2MB/s-92.2MB/s), io=157MiB (165MB), run=1785-1785msec 00:22:20.121 21:15:35 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.121 21:15:35 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.121 21:15:35 -- common/autotest_common.sh@10 -- # set +x 00:22:20.121 21:15:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.121 21:15:35 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:20.121 21:15:35 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:20.121 21:15:35 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:20.121 21:15:35 -- host/fio.sh@84 -- # nvmftestfini 00:22:20.121 21:15:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:20.121 21:15:35 -- nvmf/common.sh@117 -- # sync 00:22:20.121 21:15:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.121 21:15:35 -- nvmf/common.sh@120 -- # set +e 00:22:20.121 21:15:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.121 21:15:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.121 rmmod nvme_tcp 00:22:20.121 rmmod nvme_fabrics 00:22:20.121 rmmod nvme_keyring 00:22:20.121 21:15:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.121 21:15:35 -- nvmf/common.sh@124 -- # set -e 00:22:20.121 21:15:35 -- nvmf/common.sh@125 -- # return 0 00:22:20.121 21:15:35 -- nvmf/common.sh@478 -- # '[' -n 3140094 ']' 00:22:20.121 21:15:35 -- nvmf/common.sh@479 -- # killprocess 3140094 00:22:20.121 21:15:35 -- common/autotest_common.sh@936 -- # '[' -z 3140094 ']' 00:22:20.122 21:15:35 -- common/autotest_common.sh@940 -- # kill -0 3140094 00:22:20.122 21:15:35 -- common/autotest_common.sh@941 -- # uname 00:22:20.122 21:15:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:20.122 21:15:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3140094 00:22:20.122 21:15:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:20.122 21:15:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:20.122 21:15:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3140094' 00:22:20.122 killing process with pid 3140094 00:22:20.122 21:15:35 -- common/autotest_common.sh@955 -- # kill 3140094 00:22:20.122 21:15:35 -- common/autotest_common.sh@960 -- # wait 3140094 00:22:20.122 21:15:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:20.122 21:15:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:20.122 21:15:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:20.122 21:15:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.122 21:15:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.122 21:15:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.122 21:15:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.122 21:15:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.028 21:15:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:22.287 00:22:22.287 real 0m14.559s 00:22:22.287 user 0m40.509s 00:22:22.287 sys 0m6.470s 00:22:22.287 21:15:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:22.287 21:15:37 -- common/autotest_common.sh@10 -- # set +x 00:22:22.287 ************************************ 00:22:22.287 END TEST nvmf_fio_host 00:22:22.287 ************************************ 00:22:22.287 21:15:37 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:22.287 21:15:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:22.287 21:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:22.287 21:15:37 -- common/autotest_common.sh@10 -- # 
set +x 00:22:22.287 ************************************ 00:22:22.287 START TEST nvmf_failover 00:22:22.287 ************************************ 00:22:22.287 21:15:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:22.287 * Looking for test storage... 00:22:22.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.546 21:15:38 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.546 21:15:38 -- nvmf/common.sh@7 -- # uname -s 00:22:22.546 21:15:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.546 21:15:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.546 21:15:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.546 21:15:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.546 21:15:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.546 21:15:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.546 21:15:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.546 21:15:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.546 21:15:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.546 21:15:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.546 21:15:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.546 21:15:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.546 21:15:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.546 21:15:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.546 21:15:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.546 21:15:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.546 21:15:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.546 21:15:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.546 21:15:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.546 21:15:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.546 21:15:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.546 21:15:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.546 21:15:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.546 21:15:38 -- paths/export.sh@5 -- # export PATH 00:22:22.546 21:15:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.546 21:15:38 -- nvmf/common.sh@47 -- # : 0 00:22:22.546 21:15:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.546 21:15:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.546 21:15:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.546 21:15:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.546 21:15:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.546 21:15:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.546 21:15:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.546 21:15:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.546 21:15:38 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:22.546 21:15:38 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:22.546 21:15:38 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:22.546 21:15:38 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.546 21:15:38 -- host/failover.sh@18 -- # nvmftestinit 00:22:22.546 21:15:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:22.546 21:15:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.546 21:15:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:22.546 21:15:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:22.546 21:15:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:22.546 21:15:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.546 21:15:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.546 21:15:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.546 21:15:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:22.546 21:15:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:22.546 21:15:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.546 21:15:38 -- common/autotest_common.sh@10 -- # set +x 00:22:29.110 21:15:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:29.110 21:15:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.110 21:15:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.110 21:15:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.110 21:15:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.110 21:15:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.110 21:15:44 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.110 21:15:44 -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.110 21:15:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.110 21:15:44 -- nvmf/common.sh@296 -- # e810=() 00:22:29.110 21:15:44 -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.110 21:15:44 -- nvmf/common.sh@297 -- # x722=() 00:22:29.110 21:15:44 -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.110 21:15:44 -- nvmf/common.sh@298 -- # mlx=() 00:22:29.110 21:15:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.110 21:15:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.110 21:15:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.110 21:15:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.110 21:15:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.110 21:15:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.110 21:15:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:29.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:29.110 21:15:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.110 21:15:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:29.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:29.110 21:15:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.110 21:15:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.110 21:15:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.110 21:15:44 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:22:29.110 21:15:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.110 21:15:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:29.110 Found net devices under 0000:86:00.0: cvl_0_0 00:22:29.110 21:15:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.110 21:15:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.110 21:15:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.110 21:15:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:29.110 21:15:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.110 21:15:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:29.110 Found net devices under 0000:86:00.1: cvl_0_1 00:22:29.110 21:15:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.110 21:15:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:29.110 21:15:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:29.110 21:15:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:29.110 21:15:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.110 21:15:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.110 21:15:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.110 21:15:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.110 21:15:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.110 21:15:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.110 21:15:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.110 21:15:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.110 21:15:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.110 21:15:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.110 21:15:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.110 21:15:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.110 21:15:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.110 21:15:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.110 21:15:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.110 21:15:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.110 21:15:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.110 21:15:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.110 21:15:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.110 21:15:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:22:29.110 00:22:29.110 --- 10.0.0.2 ping statistics --- 00:22:29.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.110 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:22:29.110 21:15:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:22:29.110 00:22:29.110 --- 10.0.0.1 ping statistics --- 00:22:29.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.110 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:22:29.110 21:15:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.110 21:15:44 -- nvmf/common.sh@411 -- # return 0 00:22:29.110 21:15:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:29.110 21:15:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.110 21:15:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:29.110 21:15:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.110 21:15:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:29.110 21:15:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:29.110 21:15:44 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:29.110 21:15:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:29.110 21:15:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:29.110 21:15:44 -- common/autotest_common.sh@10 -- # set +x 00:22:29.110 21:15:44 -- nvmf/common.sh@470 -- # nvmfpid=3145289 00:22:29.110 21:15:44 -- nvmf/common.sh@471 -- # waitforlisten 3145289 00:22:29.110 21:15:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:29.110 21:15:44 -- common/autotest_common.sh@817 -- # '[' -z 3145289 ']' 00:22:29.110 21:15:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.110 21:15:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:29.110 21:15:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.110 21:15:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:29.110 21:15:44 -- common/autotest_common.sh@10 -- # set +x 00:22:29.110 [2024-04-18 21:15:44.400094] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:22:29.110 [2024-04-18 21:15:44.400137] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.110 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.110 [2024-04-18 21:15:44.463991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:29.110 [2024-04-18 21:15:44.533841] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.111 [2024-04-18 21:15:44.533881] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.111 [2024-04-18 21:15:44.533887] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.111 [2024-04-18 21:15:44.533893] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.111 [2024-04-18 21:15:44.533898] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:29.111 [2024-04-18 21:15:44.534017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.111 [2024-04-18 21:15:44.534112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.111 [2024-04-18 21:15:44.534114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.369 21:15:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:29.369 21:15:45 -- common/autotest_common.sh@850 -- # return 0 00:22:29.369 21:15:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:29.369 21:15:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:29.369 21:15:45 -- common/autotest_common.sh@10 -- # set +x 00:22:29.369 21:15:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.369 21:15:45 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:29.627 [2024-04-18 21:15:45.388495] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.627 21:15:45 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:29.886 Malloc0 00:22:29.886 21:15:45 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.886 21:15:45 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.145 21:15:45 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.440 [2024-04-18 21:15:46.134896] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.440 21:15:46 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:30.440 [2024-04-18 21:15:46.315403] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:30.440 21:15:46 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:30.699 [2024-04-18 21:15:46.483978] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:30.699 21:15:46 -- host/failover.sh@31 -- # bdevperf_pid=3145562 00:22:30.699 21:15:46 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:30.699 21:15:46 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.699 21:15:46 -- host/failover.sh@34 -- # waitforlisten 3145562 /var/tmp/bdevperf.sock 00:22:30.699 21:15:46 -- common/autotest_common.sh@817 -- # '[' -z 3145562 ']' 00:22:30.699 21:15:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.699 21:15:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:30.699 21:15:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:30.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:30.699 21:15:46 -- common/autotest_common.sh@826 -- # xtrace_disable
00:22:30.699 21:15:46 -- common/autotest_common.sh@10 -- # set +x
00:22:31.636 21:15:47 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:22:31.636 21:15:47 -- common/autotest_common.sh@850 -- # return 0
00:22:31.636 21:15:47 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:31.896 NVMe0n1
00:22:31.896 21:15:47 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:32.154
00:22:32.154 21:15:47 -- host/failover.sh@39 -- # run_test_pid=3145800
00:22:32.154 21:15:47 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:32.154 21:15:47 -- host/failover.sh@41 -- # sleep 1
00:22:33.091 21:15:49 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:33.350 [2024-04-18 21:15:49.162283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9690 is same with the state(5) to be set
[... the same tcp.c:1587 *ERROR* line for tqpair=0xaf9690 repeats many times while the 4420 listener is torn down ...]
00:22:33.350 21:15:49 -- host/failover.sh@45 -- # sleep 3
00:22:36.640 21:15:52 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:36.899
00:22:36.899 21:15:52 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:36.899 [2024-04-18 21:15:52.783966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafabf0 is same with the state(5) to be set
[... the same tcp.c:1587 *ERROR* line for tqpair=0xafabf0 repeats many times while the 4421 listener is torn down ...]
00:22:36.899 21:15:52 -- host/failover.sh@50 -- # sleep 3
00:22:40.188 21:15:55 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:40.188 [2024-04-18 21:15:55.980711] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:40.188 21:15:56 -- host/failover.sh@55 -- # sleep 1
00:22:41.126 21:15:57 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:41.386 [2024-04-18 21:15:57.180316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafb2d0 is same with the state(5) to be set
[... the same tcp.c:1587 *ERROR* line for tqpair=0xafb2d0 repeats several more times while the 4422 listener is torn down ...]
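To keep the sequence readable: the failover path exercised above is nothing more than listener churn against one subsystem while bdevperf drives I/O over two attached paths. A condensed sketch of the rpc.py calls as they appear in the trace (same address 10.0.0.2, NQN nqn.2016-06.io.spdk:cnode1 and sockets as above; paths abbreviated relative to the spdk checkout, and the inline comments are the expected effect, not output from this run):

    # bdevperf side: attach two paths to the same subsystem (ports 4420 and 4421)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # target side: remove and re-add listeners under load to force path failovers
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # I/O should move to 4421
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # I/O should move to 4422
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # I/O should move back to 4420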
00:22:41.386 21:15:57 -- host/failover.sh@59 -- # wait 3145800
00:22:47.965 0
00:22:47.965 21:16:03 -- host/failover.sh@61 -- # killprocess 3145562
00:22:47.965 21:16:03 -- common/autotest_common.sh@936 -- # '[' -z 3145562 ']'
00:22:47.965 21:16:03 -- common/autotest_common.sh@940 -- # kill -0 3145562
00:22:47.965 21:16:03 -- common/autotest_common.sh@941 -- # uname
00:22:47.965 21:16:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:47.965 21:16:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3145562
00:22:47.965 21:16:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:47.965 21:16:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:47.965 21:16:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3145562'
killing process with pid 3145562
00:22:47.965 21:16:03 -- common/autotest_common.sh@955 -- # kill 3145562
00:22:47.965 21:16:03 -- common/autotest_common.sh@960 -- # wait 3145562
00:22:47.965 21:16:03 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:47.965 [2024-04-18 21:15:46.553608] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization...
00:22:47.965 [2024-04-18 21:15:46.553655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145562 ]
00:22:47.965 EAL: No free 2048 kB hugepages reported on node 1
00:22:47.965 [2024-04-18 21:15:46.614881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:47.965 [2024-04-18 21:15:46.687859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:47.965 Running I/O for 15 seconds...
00:22:47.965 [2024-04-18 21:15:49.162928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.965 [2024-04-18 21:15:49.162962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for each outstanding READ/WRITE on queue 1 (lba 96720 through 97736), every command aborted with SQ DELETION ...]
00:22:47.968 [2024-04-18 21:15:49.164804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca0270 is same with the state(5) to be set
00:22:47.968 [2024-04-18 21:15:49.164811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:47.968 [2024-04-18 21:15:49.164816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:47.968 [2024-04-18 21:15:49.164824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0
00:22:47.968 [2024-04-18 21:15:49.164830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.968 [2024-04-18 21:15:49.164870] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xca0270 was disconnected and freed. reset controller.
00:22:47.968 [2024-04-18 21:15:49.164878] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:47.968 [2024-04-18 21:15:49.164899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.968 [2024-04-18 21:15:49.164906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-04-18 21:15:49.164913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.968 [2024-04-18 21:15:49.164919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-04-18 21:15:49.164926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.968 [2024-04-18 21:15:49.164933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-04-18 21:15:49.164940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.968 [2024-04-18 21:15:49.164946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-04-18 21:15:49.164952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:47.968 [2024-04-18 21:15:49.167785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.968 [2024-04-18 21:15:49.167813] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc804a0 (9): Bad file descriptor 00:22:47.968 [2024-04-18 21:15:49.280489] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
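(Editor's note, not part of the captured output.) The block above is the first failover cycle in this run: the target tears down the submission queue behind 10.0.0.2:4420, the outstanding READ/WRITE commands on qid:1 are reported as ABORTED - SQ DELETION, qpair 0xca0270 is disconnected and freed, and bdev_nvme starts a failover to 10.0.0.2:4421 that ends with "Resetting controller successful." A quick, hedged way to pull these cycles out of a saved copy of this console output is sketched below; the file name console.log is an assumption, not something the autotest scripts are known to produce.

# Hedged helper, assuming the console output was saved to console.log.
# grep -o prints one line per occurrence, which matters here because many
# log records share a single wrapped console line.
grep -o 'Start failover from [^ ]* to [^ ]*' console.log | sort | uniq -c
grep -o 'ABORTED - SQ DELETION' console.log | wc -l
grep -o 'Resetting controller successful' console.log | wc -l

The first command lists each source/destination pair seen in a failover, the second counts individual aborted commands, and the third counts completed resets, which together give a rough shape of the cycle without reading every record.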
00:22:47.968 [2024-04-18 21:15:52.784550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784735] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.784990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.784998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.785004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.785012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.785019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.785026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.785033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.785043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.785049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.785058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-04-18 21:15:52.785065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-04-18 21:15:52.785073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63896 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 
[2024-04-18 21:15:52.785323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-04-18 21:15:52.785605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.970 [2024-04-18 21:15:52.785647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-04-18 21:15:52.785655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 
21:15:52.785916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.785987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.785996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:65 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.971 [2024-04-18 21:15:52.786236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.971 [2024-04-18 21:15:52.786243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64488 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:52.786433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.972 [2024-04-18 21:15:52.786459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.972 [2024-04-18 21:15:52.786466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63880 len:8 PRP1 0x0 PRP2 0x0 00:22:47.972 [2024-04-18 21:15:52.786474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786520] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7aac0 was disconnected and freed. reset controller. 
00:22:47.972 [2024-04-18 21:15:52.786529] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:47.972 [2024-04-18 21:15:52.786548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.972 [2024-04-18 21:15:52.786556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.972 [2024-04-18 21:15:52.786569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.972 [2024-04-18 21:15:52.786582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.972 [2024-04-18 21:15:52.786596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:52.786603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:47.972 [2024-04-18 21:15:52.789413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.972 [2024-04-18 21:15:52.789441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc804a0 (9): Bad file descriptor 00:22:47.972 [2024-04-18 21:15:52.985109] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
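(Editor's note, not part of the captured output.) This is the second cycle of the same pattern: qpair 0xc7aac0 is disconnected and freed, bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the reset again completes successfully. If the per-cycle abort volume is of interest, the awk sketch below attributes each ABORTED - SQ DELETION record to the cycle that precedes the next "Start failover" marker; it is only a sketch and again assumes the hypothetical console.log capture from the previous note.

awk '
{
  line = $0
  # Several records share one physical console line, so walk each line
  # record by record instead of counting whole lines.
  while (match(line, /Start failover from [^ ]+ to [^ ]+/)) {
    rs = RSTART; rl = RLENGTH
    pre = substr(line, 1, rs - 1)
    aborts += gsub(/ABORTED - SQ DELETION/, "&", pre)
    printf "%s  (aborted commands in preceding cycle: %d)\n", substr(line, rs, rl), aborts
    aborts = 0
    line = substr(line, rs + rl)
  }
  # Carry aborts that follow the last failover marker into the next cycle.
  aborts += gsub(/ABORTED - SQ DELETION/, "&", line)
}
' console.log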
00:22:47.972 [2024-04-18 21:15:57.181082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.972 [2024-04-18 21:15:57.181221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.972 [2024-04-18 21:15:57.181235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.972 [2024-04-18 21:15:57.181250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.972 [2024-04-18 21:15:57.181264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 
21:15:57.181272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.972 [2024-04-18 21:15:57.181279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.972 [2024-04-18 21:15:57.181293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.972 [2024-04-18 21:15:57.181307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.972 [2024-04-18 21:15:57.181379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.972 [2024-04-18 21:15:57.181388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:85 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120712 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-04-18 21:15:57.181809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-04-18 21:15:57.181824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-04-18 21:15:57.181838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-04-18 21:15:57.181852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:47.973 [2024-04-18 21:15:57.181866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-04-18 21:15:57.181881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-04-18 21:15:57.181895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-04-18 21:15:57.181910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.973 [2024-04-18 21:15:57.181967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-04-18 21:15:57.181975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.181982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.181989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.181996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.974 [2024-04-18 21:15:57.182253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-04-18 21:15:57.182471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-04-18 21:15:57.182479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:47.975 [2024-04-18 21:15:57.182599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 
21:15:57.182744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.975 [2024-04-18 21:15:57.182868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-04-18 21:15:57.182968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.182987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.975 [2024-04-18 21:15:57.182993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.975 [2024-04-18 21:15:57.182999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120440 len:8 PRP1 0x0 PRP2 0x0 00:22:47.975 [2024-04-18 21:15:57.183006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.183046] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7a8b0 was disconnected and freed. reset controller. 
00:22:47.975 [2024-04-18 21:15:57.183056] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:47.975 [2024-04-18 21:15:57.183074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.975 [2024-04-18 21:15:57.183082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.183089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.975 [2024-04-18 21:15:57.183097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-04-18 21:15:57.183104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.975 [2024-04-18 21:15:57.183110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.976 [2024-04-18 21:15:57.183117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.976 [2024-04-18 21:15:57.183124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.976 [2024-04-18 21:15:57.183131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:47.976 [2024-04-18 21:15:57.185953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.976 [2024-04-18 21:15:57.185982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc804a0 (9): Bad file descriptor 00:22:47.976 [2024-04-18 21:15:57.348145] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
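That was the last failover hop of the 15-second verify run (4420 -> 4421 -> 4422 -> 4420); the summary table that follows reports the I/O completed during that run. The script then asserts exactly three successful resets, as the host/failover.sh@65-67 trace below shows. A condensed sketch of that check, assuming the bdevperf output was captured in the try.txt file the script cats and removes later:
  # Pass/fail criterion of the failover test: exactly one successful reset per hop.
  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt   # assumed capture file
  count=$(grep -c 'Resetting controller successful' "$log")
  (( count == 3 )) || { echo "expected 3 failover resets, got $count" >&2; exit 1; }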
00:22:47.976 00:22:47.976 Latency(us) 00:22:47.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.976 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.976 Verification LBA range: start 0x0 length 0x4000 00:22:47.976 NVMe0n1 : 15.01 10841.80 42.35 1473.76 0.00 10371.04 847.69 19033.93 00:22:47.976 =================================================================================================================== 00:22:47.976 Total : 10841.80 42.35 1473.76 0.00 10371.04 847.69 19033.93 00:22:47.976 Received shutdown signal, test time was about 15.000000 seconds 00:22:47.976 00:22:47.976 Latency(us) 00:22:47.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.976 =================================================================================================================== 00:22:47.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.976 21:16:03 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:47.976 21:16:03 -- host/failover.sh@65 -- # count=3 00:22:47.976 21:16:03 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:47.976 21:16:03 -- host/failover.sh@73 -- # bdevperf_pid=3148319 00:22:47.976 21:16:03 -- host/failover.sh@75 -- # waitforlisten 3148319 /var/tmp/bdevperf.sock 00:22:47.976 21:16:03 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:47.976 21:16:03 -- common/autotest_common.sh@817 -- # '[' -z 3148319 ']' 00:22:47.976 21:16:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.976 21:16:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:47.976 21:16:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
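At this point the harness launches a second bdevperf in wait mode (-z) against /var/tmp/bdevperf.sock for a 1-second verify run and then drives it over RPC. A condensed, slightly paraphrased sketch of that flow as it appears in the surrounding trace (waitforlisten and killprocess are autotest helper functions; the full Jenkins workspace path is shortened to $spdk):
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf idle; -z makes it wait for RPC commands instead of running immediately.
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock   # poll until the RPC socket is ready
  # ... register the NVMe-oF paths with bdev_nvme_attach_controller (see the trace below) ...
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  killprocess $bdevperf_pid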
00:22:47.976 21:16:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:47.976 21:16:03 -- common/autotest_common.sh@10 -- # set +x 00:22:48.544 21:16:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:48.544 21:16:04 -- common/autotest_common.sh@850 -- # return 0 00:22:48.544 21:16:04 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:48.544 [2024-04-18 21:16:04.422463] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:48.544 21:16:04 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:48.804 [2024-04-18 21:16:04.602991] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:48.804 21:16:04 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.063 NVMe0n1 00:22:49.063 21:16:04 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.685 00:22:49.685 21:16:05 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.685 00:22:49.685 21:16:05 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:49.685 21:16:05 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:49.942 21:16:05 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:50.200 21:16:05 -- host/failover.sh@87 -- # sleep 3 00:22:53.488 21:16:08 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:53.488 21:16:08 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:53.488 21:16:09 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.488 21:16:09 -- host/failover.sh@90 -- # run_test_pid=3149251 00:22:53.488 21:16:09 -- host/failover.sh@92 -- # wait 3149251 00:22:54.425 0 00:22:54.425 21:16:10 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:54.425 [2024-04-18 21:16:03.457724] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:22:54.425 [2024-04-18 21:16:03.457777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148319 ] 00:22:54.425 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.425 [2024-04-18 21:16:03.517965] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.425 [2024-04-18 21:16:03.585469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.425 [2024-04-18 21:16:05.906123] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:54.425 [2024-04-18 21:16:05.906170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.425 [2024-04-18 21:16:05.906181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.425 [2024-04-18 21:16:05.906190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.425 [2024-04-18 21:16:05.906197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.425 [2024-04-18 21:16:05.906205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.425 [2024-04-18 21:16:05.906212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.425 [2024-04-18 21:16:05.906219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.425 [2024-04-18 21:16:05.906226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.425 [2024-04-18 21:16:05.906232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.425 [2024-04-18 21:16:05.906261] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f414a0 (9): Bad file descriptor 00:22:54.425 [2024-04-18 21:16:05.906274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.425 [2024-04-18 21:16:05.951124] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:54.425 Running I/O for 1 seconds... 
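The try.txt excerpt above is the second bdevperf instance starting up, attaching nqn.2016-06.io.spdk:cnode1 and immediately failing over from 10.0.0.2:4420 to 10.0.0.2:4421 because the 4420 path had just been detached. The commands that create that situation all appear in the host/failover.sh trace around this point; condensed into one place for readability (target RPC socket and bdevperf RPC socket kept distinct):
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Target side: expose the subsystem on two additional ports.
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Initiator side (bdevperf): register all three paths under the same bdev name NVMe0.
  brpc="$rpc_py -s /var/tmp/bdevperf.sock"
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Detaching the active 4420 path forces the failover to 4421 recorded in try.txt above.
  $brpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1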
00:22:54.425 00:22:54.425 Latency(us) 00:22:54.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.425 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:54.425 Verification LBA range: start 0x0 length 0x4000 00:22:54.425 NVMe0n1 : 1.01 11255.37 43.97 0.00 0.00 11313.53 2122.80 19375.86 00:22:54.425 =================================================================================================================== 00:22:54.425 Total : 11255.37 43.97 0.00 0.00 11313.53 2122.80 19375.86 00:22:54.425 21:16:10 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:54.425 21:16:10 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:54.684 21:16:10 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:54.684 21:16:10 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:54.684 21:16:10 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:54.942 21:16:10 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.201 21:16:10 -- host/failover.sh@101 -- # sleep 3 00:22:58.491 21:16:13 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:58.491 21:16:13 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:58.491 21:16:14 -- host/failover.sh@108 -- # killprocess 3148319 00:22:58.491 21:16:14 -- common/autotest_common.sh@936 -- # '[' -z 3148319 ']' 00:22:58.491 21:16:14 -- common/autotest_common.sh@940 -- # kill -0 3148319 00:22:58.491 21:16:14 -- common/autotest_common.sh@941 -- # uname 00:22:58.491 21:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:58.491 21:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3148319 00:22:58.491 21:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:58.491 21:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:58.491 21:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3148319' 00:22:58.491 killing process with pid 3148319 00:22:58.491 21:16:14 -- common/autotest_common.sh@955 -- # kill 3148319 00:22:58.491 21:16:14 -- common/autotest_common.sh@960 -- # wait 3148319 00:22:58.491 21:16:14 -- host/failover.sh@110 -- # sync 00:22:58.491 21:16:14 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.751 21:16:14 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:58.751 21:16:14 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:58.751 21:16:14 -- host/failover.sh@116 -- # nvmftestfini 00:22:58.751 21:16:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:58.751 21:16:14 -- nvmf/common.sh@117 -- # sync 00:22:58.751 21:16:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:58.751 21:16:14 -- nvmf/common.sh@120 -- # set +e 00:22:58.751 21:16:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:58.751 21:16:14 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:22:58.751 rmmod nvme_tcp 00:22:58.751 rmmod nvme_fabrics 00:22:58.751 rmmod nvme_keyring 00:22:58.751 21:16:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:58.751 21:16:14 -- nvmf/common.sh@124 -- # set -e 00:22:58.751 21:16:14 -- nvmf/common.sh@125 -- # return 0 00:22:58.751 21:16:14 -- nvmf/common.sh@478 -- # '[' -n 3145289 ']' 00:22:58.751 21:16:14 -- nvmf/common.sh@479 -- # killprocess 3145289 00:22:58.751 21:16:14 -- common/autotest_common.sh@936 -- # '[' -z 3145289 ']' 00:22:58.751 21:16:14 -- common/autotest_common.sh@940 -- # kill -0 3145289 00:22:58.751 21:16:14 -- common/autotest_common.sh@941 -- # uname 00:22:58.751 21:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:58.751 21:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3145289 00:22:59.009 21:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:59.009 21:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:59.009 21:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3145289' 00:22:59.009 killing process with pid 3145289 00:22:59.009 21:16:14 -- common/autotest_common.sh@955 -- # kill 3145289 00:22:59.009 21:16:14 -- common/autotest_common.sh@960 -- # wait 3145289 00:22:59.009 21:16:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:59.009 21:16:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:59.009 21:16:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:59.009 21:16:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.009 21:16:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.009 21:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.009 21:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.009 21:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.547 21:16:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:01.547 00:23:01.547 real 0m38.864s 00:23:01.547 user 2m3.219s 00:23:01.547 sys 0m7.941s 00:23:01.547 21:16:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:01.547 21:16:16 -- common/autotest_common.sh@10 -- # set +x 00:23:01.547 ************************************ 00:23:01.547 END TEST nvmf_failover 00:23:01.547 ************************************ 00:23:01.547 21:16:17 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:01.547 21:16:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:01.547 21:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:01.547 21:16:17 -- common/autotest_common.sh@10 -- # set +x 00:23:01.547 ************************************ 00:23:01.547 START TEST nvmf_discovery 00:23:01.547 ************************************ 00:23:01.547 21:16:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:01.547 * Looking for test storage... 
00:23:01.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:01.547 21:16:17 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.547 21:16:17 -- nvmf/common.sh@7 -- # uname -s 00:23:01.547 21:16:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.547 21:16:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.547 21:16:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.547 21:16:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.547 21:16:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.547 21:16:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.547 21:16:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.547 21:16:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.547 21:16:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.547 21:16:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.547 21:16:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:01.547 21:16:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:01.547 21:16:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.547 21:16:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.547 21:16:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.547 21:16:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.547 21:16:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.547 21:16:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.547 21:16:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.547 21:16:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.547 21:16:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.547 21:16:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.547 21:16:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.547 21:16:17 -- paths/export.sh@5 -- # export PATH 00:23:01.547 21:16:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.547 21:16:17 -- nvmf/common.sh@47 -- # : 0 00:23:01.547 21:16:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.547 21:16:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.547 21:16:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.547 21:16:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.547 21:16:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.547 21:16:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.547 21:16:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.547 21:16:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.547 21:16:17 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:01.547 21:16:17 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:01.547 21:16:17 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:01.547 21:16:17 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:01.547 21:16:17 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:01.547 21:16:17 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:01.547 21:16:17 -- host/discovery.sh@25 -- # nvmftestinit 00:23:01.548 21:16:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:01.548 21:16:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.548 21:16:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:01.548 21:16:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:01.548 21:16:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:01.548 21:16:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.548 21:16:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.548 21:16:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.548 21:16:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:01.548 21:16:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:01.548 21:16:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:01.548 21:16:17 -- common/autotest_common.sh@10 -- # set +x 00:23:06.819 21:16:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:06.819 21:16:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:06.819 21:16:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.819 21:16:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.819 21:16:22 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.819 21:16:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.819 21:16:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.819 21:16:22 -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.819 21:16:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.819 21:16:22 -- nvmf/common.sh@296 -- # e810=() 00:23:06.819 21:16:22 -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.819 21:16:22 -- nvmf/common.sh@297 -- # x722=() 00:23:06.819 21:16:22 -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.819 21:16:22 -- nvmf/common.sh@298 -- # mlx=() 00:23:06.819 21:16:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.819 21:16:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.819 21:16:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.819 21:16:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.819 21:16:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.819 21:16:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.819 21:16:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:06.819 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:06.819 21:16:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.819 21:16:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:06.819 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:06.819 21:16:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.819 21:16:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.819 
21:16:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.819 21:16:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:06.819 21:16:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.819 21:16:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:06.819 Found net devices under 0000:86:00.0: cvl_0_0 00:23:06.819 21:16:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.819 21:16:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.819 21:16:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.819 21:16:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:06.819 21:16:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.819 21:16:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:06.819 Found net devices under 0000:86:00.1: cvl_0_1 00:23:06.819 21:16:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.819 21:16:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:06.819 21:16:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:06.819 21:16:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:06.819 21:16:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.819 21:16:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.819 21:16:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.819 21:16:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.819 21:16:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.819 21:16:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.819 21:16:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.819 21:16:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.819 21:16:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.819 21:16:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.819 21:16:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.819 21:16:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.819 21:16:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.819 21:16:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.819 21:16:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.819 21:16:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.819 21:16:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.819 21:16:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.819 21:16:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.819 21:16:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:06.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:23:06.819 00:23:06.819 --- 10.0.0.2 ping statistics --- 00:23:06.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.819 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:23:06.819 21:16:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:23:06.819 00:23:06.819 --- 10.0.0.1 ping statistics --- 00:23:06.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.819 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:23:06.819 21:16:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.819 21:16:22 -- nvmf/common.sh@411 -- # return 0 00:23:06.819 21:16:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:06.819 21:16:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.819 21:16:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:06.819 21:16:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.819 21:16:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:06.819 21:16:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:07.078 21:16:22 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:07.078 21:16:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:07.078 21:16:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:07.078 21:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.078 21:16:22 -- nvmf/common.sh@470 -- # nvmfpid=3153975 00:23:07.078 21:16:22 -- nvmf/common.sh@471 -- # waitforlisten 3153975 00:23:07.078 21:16:22 -- common/autotest_common.sh@817 -- # '[' -z 3153975 ']' 00:23:07.078 21:16:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.078 21:16:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.078 21:16:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.078 21:16:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:07.078 21:16:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.078 21:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.078 [2024-04-18 21:16:22.807661] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:23:07.078 [2024-04-18 21:16:22.807705] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.078 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.078 [2024-04-18 21:16:22.869064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.078 [2024-04-18 21:16:22.946625] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.078 [2024-04-18 21:16:22.946659] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.078 [2024-04-18 21:16:22.946666] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.078 [2024-04-18 21:16:22.946672] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.078 [2024-04-18 21:16:22.946677] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
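The nvmf_tcp_init portion of the trace above builds the usual single-host topology for these runs: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables accept rule for TCP port 4420 on the initiator interface and a ping in each direction as a sanity check. A minimal stand-alone recreation of that setup, using only the interface names, addresses and port from the trace (it assumes cvl_0_0 and cvl_0_1 exist and carry no other configuration):

  # Minimal sketch of the nvmf_tcp_init network setup seen in the trace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator side stays in the root namespace; target side lives in the namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic in on the initiator interface, then ping both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1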
00:23:07.078 [2024-04-18 21:16:22.946716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.013 21:16:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:08.013 21:16:23 -- common/autotest_common.sh@850 -- # return 0 00:23:08.013 21:16:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:08.013 21:16:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:08.013 21:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.013 21:16:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.013 21:16:23 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.013 21:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.013 21:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.013 [2024-04-18 21:16:23.634328] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.013 21:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.013 21:16:23 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:08.013 21:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.013 21:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.013 [2024-04-18 21:16:23.642445] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:08.013 21:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.013 21:16:23 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:08.013 21:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.013 21:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.013 null0 00:23:08.013 21:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.013 21:16:23 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:08.013 21:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.013 21:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.013 null1 00:23:08.013 21:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.013 21:16:23 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:08.013 21:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.013 21:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.013 21:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.013 21:16:23 -- host/discovery.sh@45 -- # hostpid=3154080 00:23:08.013 21:16:23 -- host/discovery.sh@46 -- # waitforlisten 3154080 /tmp/host.sock 00:23:08.013 21:16:23 -- common/autotest_common.sh@817 -- # '[' -z 3154080 ']' 00:23:08.013 21:16:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:08.013 21:16:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:08.013 21:16:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:08.013 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:08.013 21:16:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:08.013 21:16:23 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:08.013 21:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.013 [2024-04-18 21:16:23.717640] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
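From here the discovery test drives two SPDK applications at once: the nvmf target (core mask 0x2, pid 3153975) running inside the namespace and answering RPCs on the default /var/tmp/spdk.sock, and a second nvmf_tgt instance acting as the host (core mask 0x1, pid 3154080) on /tmp/host.sock, which is the process that actually runs the discovery service. A sketch of the RPC calls each side receives, with sockets, NQNs, ports and bdev parameters taken from the trace (both applications are assumed to have been started as shown above; rpc_cmd without -s resolves to the default /var/tmp/spdk.sock):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Target side (default RPC socket): TCP transport, a discovery listener on
  # 10.0.0.2:8009, and two null bdevs to expose through cnode0 later in the test.
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $SPDK/scripts/rpc.py bdev_null_create null0 1000 512
  $SPDK/scripts/rpc.py bdev_null_create null1 1000 512

  # Host side (/tmp/host.sock): point the bdev_nvme discovery service at the
  # target's discovery port; this is the bdev_nvme_start_discovery call that
  # appears a few lines further down in the trace.
  $SPDK/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test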
00:23:08.013 [2024-04-18 21:16:23.717683] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154080 ] 00:23:08.013 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.013 [2024-04-18 21:16:23.776769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.013 [2024-04-18 21:16:23.854229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.609 21:16:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:08.609 21:16:24 -- common/autotest_common.sh@850 -- # return 0 00:23:08.609 21:16:24 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.609 21:16:24 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:08.609 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.609 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.609 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.609 21:16:24 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:08.609 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.609 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.609 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.609 21:16:24 -- host/discovery.sh@72 -- # notify_id=0 00:23:08.609 21:16:24 -- host/discovery.sh@83 -- # get_subsystem_names 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:08.868 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # sort 00:23:08.868 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # xargs 00:23:08.868 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.868 21:16:24 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:08.868 21:16:24 -- host/discovery.sh@84 -- # get_bdev_list 00:23:08.868 21:16:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.868 21:16:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.868 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.868 21:16:24 -- host/discovery.sh@55 -- # sort 00:23:08.868 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.868 21:16:24 -- host/discovery.sh@55 -- # xargs 00:23:08.868 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.868 21:16:24 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:08.868 21:16:24 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:08.868 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.868 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.868 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.868 21:16:24 -- host/discovery.sh@87 -- # get_subsystem_names 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:08.868 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # sort 
00:23:08.868 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # xargs 00:23:08.868 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.868 21:16:24 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:08.868 21:16:24 -- host/discovery.sh@88 -- # get_bdev_list 00:23:08.868 21:16:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.868 21:16:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.868 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.868 21:16:24 -- host/discovery.sh@55 -- # sort 00:23:08.868 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.868 21:16:24 -- host/discovery.sh@55 -- # xargs 00:23:08.868 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.868 21:16:24 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:08.868 21:16:24 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:08.868 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.868 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.868 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.868 21:16:24 -- host/discovery.sh@91 -- # get_subsystem_names 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # sort 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:08.868 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.868 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:08.868 21:16:24 -- host/discovery.sh@59 -- # xargs 00:23:08.868 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.126 21:16:24 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:09.126 21:16:24 -- host/discovery.sh@92 -- # get_bdev_list 00:23:09.126 21:16:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.126 21:16:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:09.126 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.126 21:16:24 -- host/discovery.sh@55 -- # sort 00:23:09.126 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 21:16:24 -- host/discovery.sh@55 -- # xargs 00:23:09.127 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.127 21:16:24 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:09.127 21:16:24 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:09.127 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.127 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 [2024-04-18 21:16:24.861662] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.127 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.127 21:16:24 -- host/discovery.sh@97 -- # get_subsystem_names 00:23:09.127 21:16:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:09.127 21:16:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:09.127 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.127 21:16:24 -- host/discovery.sh@59 -- # sort 00:23:09.127 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 21:16:24 -- host/discovery.sh@59 -- # xargs 00:23:09.127 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.127 21:16:24 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:09.127 21:16:24 -- host/discovery.sh@98 -- # get_bdev_list 00:23:09.127 21:16:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.127 21:16:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:09.127 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.127 21:16:24 -- host/discovery.sh@55 -- # sort 00:23:09.127 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 21:16:24 -- host/discovery.sh@55 -- # xargs 00:23:09.127 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.127 21:16:24 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:09.127 21:16:24 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:09.127 21:16:24 -- host/discovery.sh@79 -- # expected_count=0 00:23:09.127 21:16:24 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:09.127 21:16:24 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:09.127 21:16:24 -- common/autotest_common.sh@901 -- # local max=10 00:23:09.127 21:16:24 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:09.127 21:16:24 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:09.127 21:16:24 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:09.127 21:16:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:09.127 21:16:24 -- host/discovery.sh@74 -- # jq '. | length' 00:23:09.127 21:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.127 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 21:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.127 21:16:25 -- host/discovery.sh@74 -- # notification_count=0 00:23:09.127 21:16:25 -- host/discovery.sh@75 -- # notify_id=0 00:23:09.127 21:16:25 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:09.127 21:16:25 -- common/autotest_common.sh@904 -- # return 0 00:23:09.127 21:16:25 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:09.127 21:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.127 21:16:25 -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 21:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.127 21:16:25 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:09.127 21:16:25 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:09.127 21:16:25 -- common/autotest_common.sh@901 -- # local max=10 00:23:09.127 21:16:25 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:09.127 21:16:25 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:09.127 21:16:25 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:09.127 21:16:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:09.127 21:16:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:09.127 21:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.127 21:16:25 -- host/discovery.sh@59 -- # sort 00:23:09.127 21:16:25 -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 21:16:25 -- host/discovery.sh@59 -- # xargs 00:23:09.127 21:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:23:09.385 21:16:25 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:23:09.385 21:16:25 -- common/autotest_common.sh@906 -- # sleep 1 00:23:09.644 [2024-04-18 21:16:25.542539] bdev_nvme.c:6930:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:09.644 [2024-04-18 21:16:25.542561] bdev_nvme.c:7010:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:09.644 [2024-04-18 21:16:25.542575] bdev_nvme.c:6893:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:09.902 [2024-04-18 21:16:25.629841] bdev_nvme.c:6859:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:09.902 [2024-04-18 21:16:25.692890] bdev_nvme.c:6749:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:09.902 [2024-04-18 21:16:25.692908] bdev_nvme.c:6708:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:10.161 21:16:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.161 21:16:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:10.161 21:16:26 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:10.161 21:16:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.161 21:16:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.161 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.161 21:16:26 -- host/discovery.sh@59 -- # sort 00:23:10.161 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.161 21:16:26 -- host/discovery.sh@59 -- # xargs 00:23:10.161 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.419 21:16:26 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.419 21:16:26 -- common/autotest_common.sh@904 -- # return 0 00:23:10.419 21:16:26 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:10.419 21:16:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:10.419 21:16:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.419 21:16:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.419 21:16:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:10.419 21:16:26 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:10.419 21:16:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.419 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.419 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.419 21:16:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.419 21:16:26 -- host/discovery.sh@55 -- # sort 00:23:10.419 21:16:26 -- host/discovery.sh@55 -- # xargs 00:23:10.419 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.419 21:16:26 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:10.419 21:16:26 -- common/autotest_common.sh@904 -- # return 0 00:23:10.419 21:16:26 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:10.419 21:16:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:10.419 21:16:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.419 21:16:26 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.419 21:16:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:10.419 21:16:26 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:10.419 21:16:26 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:10.419 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.420 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.420 21:16:26 -- host/discovery.sh@63 -- # xargs 00:23:10.420 21:16:26 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:10.420 21:16:26 -- host/discovery.sh@63 -- # sort -n 00:23:10.420 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.420 21:16:26 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:23:10.420 21:16:26 -- common/autotest_common.sh@904 -- # return 0 00:23:10.420 21:16:26 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:10.420 21:16:26 -- host/discovery.sh@79 -- # expected_count=1 00:23:10.420 21:16:26 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:10.420 21:16:26 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:10.420 21:16:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.420 21:16:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.420 21:16:26 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:10.420 21:16:26 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:10.420 21:16:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:10.420 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.420 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.420 21:16:26 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:10.420 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.420 21:16:26 -- host/discovery.sh@74 -- # notification_count=1 00:23:10.420 21:16:26 -- host/discovery.sh@75 -- # notify_id=1 00:23:10.420 21:16:26 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:10.420 21:16:26 -- common/autotest_common.sh@904 -- # return 0 00:23:10.420 21:16:26 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:10.420 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.420 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.420 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.420 21:16:26 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:10.420 21:16:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:10.420 21:16:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.420 21:16:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.420 21:16:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:10.420 21:16:26 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:10.420 21:16:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.420 21:16:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.420 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.420 21:16:26 -- host/discovery.sh@55 -- # sort 00:23:10.420 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.420 21:16:26 -- host/discovery.sh@55 -- # xargs 00:23:10.679 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:10.679 21:16:26 -- common/autotest_common.sh@904 -- # return 0 00:23:10.679 21:16:26 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:10.679 21:16:26 -- host/discovery.sh@79 -- # expected_count=1 00:23:10.679 21:16:26 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:10.679 21:16:26 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:10.679 21:16:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.679 21:16:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:10.679 21:16:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:10.679 21:16:26 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:10.679 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.679 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.679 21:16:26 -- host/discovery.sh@74 -- # notification_count=1 00:23:10.679 21:16:26 -- host/discovery.sh@75 -- # notify_id=2 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:10.679 21:16:26 -- common/autotest_common.sh@904 -- # return 0 00:23:10.679 21:16:26 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:10.679 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.679 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 [2024-04-18 21:16:26.518248] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:10.679 [2024-04-18 21:16:26.518681] bdev_nvme.c:6912:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:10.679 [2024-04-18 21:16:26.518702] bdev_nvme.c:6893:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:10.679 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.679 21:16:26 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:10.679 21:16:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:10.679 21:16:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.679 21:16:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:10.679 21:16:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.679 21:16:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.679 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.679 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 21:16:26 -- host/discovery.sh@59 -- # sort 00:23:10.679 21:16:26 -- host/discovery.sh@59 -- # xargs 00:23:10.679 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.679 21:16:26 -- common/autotest_common.sh@904 -- # return 0 00:23:10.679 21:16:26 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:10.679 21:16:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:10.679 21:16:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.679 21:16:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:10.679 21:16:26 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:10.679 21:16:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.679 21:16:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.679 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.679 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 21:16:26 -- host/discovery.sh@55 -- # sort 00:23:10.679 21:16:26 -- host/discovery.sh@55 -- # xargs 00:23:10.679 21:16:26 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:23:10.679 [2024-04-18 21:16:26.605217] bdev_nvme.c:6854:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:10.938 21:16:26 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:10.938 21:16:26 -- common/autotest_common.sh@904 -- # return 0 00:23:10.938 21:16:26 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:10.938 21:16:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:10.938 21:16:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.938 21:16:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.938 21:16:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:10.938 21:16:26 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:10.938 21:16:26 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:10.938 21:16:26 -- host/discovery.sh@63 -- # xargs 00:23:10.938 21:16:26 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:10.938 21:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.938 21:16:26 -- host/discovery.sh@63 -- # sort -n 00:23:10.938 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.938 21:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.938 21:16:26 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:10.938 21:16:26 -- common/autotest_common.sh@906 -- # sleep 1 00:23:10.938 [2024-04-18 21:16:26.706883] bdev_nvme.c:6749:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:10.938 [2024-04-18 21:16:26.706899] bdev_nvme.c:6708:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:10.938 [2024-04-18 21:16:26.706904] bdev_nvme.c:6708:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:11.874 21:16:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:11.874 21:16:27 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:11.874 21:16:27 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:11.874 21:16:27 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:11.874 21:16:27 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:11.874 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.874 21:16:27 -- host/discovery.sh@63 -- # sort -n 00:23:11.874 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:11.874 21:16:27 -- host/discovery.sh@63 -- # xargs 00:23:11.874 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.874 21:16:27 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:11.874 21:16:27 -- common/autotest_common.sh@904 -- # return 0 00:23:11.874 21:16:27 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:11.874 21:16:27 -- host/discovery.sh@79 -- # expected_count=0 00:23:11.874 21:16:27 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:11.874 21:16:27 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:11.874 21:16:27 -- common/autotest_common.sh@901 -- # local max=10 00:23:11.874 21:16:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:11.874 21:16:27 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:11.874 21:16:27 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:11.874 21:16:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:11.874 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.874 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:11.874 21:16:27 -- host/discovery.sh@74 -- # jq '. | length' 00:23:11.874 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.874 21:16:27 -- host/discovery.sh@74 -- # notification_count=0 00:23:11.874 21:16:27 -- host/discovery.sh@75 -- # notify_id=2 00:23:11.874 21:16:27 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:11.874 21:16:27 -- common/autotest_common.sh@904 -- # return 0 00:23:11.874 21:16:27 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:11.874 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.875 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:11.875 [2024-04-18 21:16:27.778100] bdev_nvme.c:6912:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:11.875 [2024-04-18 21:16:27.778122] bdev_nvme.c:6893:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:11.875 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.875 21:16:27 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:11.875 21:16:27 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:11.875 21:16:27 -- common/autotest_common.sh@901 -- # local max=10 00:23:11.875 21:16:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:11.875 21:16:27 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:11.875 [2024-04-18 21:16:27.784742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.875 [2024-04-18 21:16:27.784759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.875 [2024-04-18 21:16:27.784767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.875 [2024-04-18 21:16:27.784774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.875 [2024-04-18 21:16:27.784781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.875 [2024-04-18 21:16:27.784788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.875 [2024-04-18 21:16:27.784795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.875 [2024-04-18 21:16:27.784801] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.875 [2024-04-18 21:16:27.784808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1780 is same with the state(5) to be set 00:23:11.875 21:16:27 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:11.875 21:16:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:11.875 21:16:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:11.875 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.875 21:16:27 -- host/discovery.sh@59 -- # sort 00:23:11.875 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:11.875 21:16:27 -- host/discovery.sh@59 -- # xargs 00:23:11.875 [2024-04-18 21:16:27.794755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1780 (9): Bad file descriptor 00:23:11.875 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.875 [2024-04-18 21:16:27.804793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:11.875 [2024-04-18 21:16:27.805131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.805363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.805375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1780 with addr=10.0.0.2, port=4420 00:23:12.135 [2024-04-18 21:16:27.805382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1780 is same with the state(5) to be set 00:23:12.135 [2024-04-18 21:16:27.805394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1780 (9): Bad file descriptor 00:23:12.135 [2024-04-18 21:16:27.805416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.135 [2024-04-18 21:16:27.805424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.135 [2024-04-18 21:16:27.805431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.135 [2024-04-18 21:16:27.805441] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.135 [2024-04-18 21:16:27.814843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.135 [2024-04-18 21:16:27.815189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.815480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.815490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1780 with addr=10.0.0.2, port=4420 00:23:12.135 [2024-04-18 21:16:27.815497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1780 is same with the state(5) to be set 00:23:12.135 [2024-04-18 21:16:27.815507] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1780 (9): Bad file descriptor 00:23:12.135 [2024-04-18 21:16:27.815528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.135 [2024-04-18 21:16:27.815535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.135 [2024-04-18 21:16:27.815541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.135 [2024-04-18 21:16:27.815550] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.135 [2024-04-18 21:16:27.824894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.135 [2024-04-18 21:16:27.825290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.825412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.825423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1780 with addr=10.0.0.2, port=4420 00:23:12.135 [2024-04-18 21:16:27.825429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1780 is same with the state(5) to be set 00:23:12.135 [2024-04-18 21:16:27.825440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1780 (9): Bad file descriptor 00:23:12.135 [2024-04-18 21:16:27.825450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.135 [2024-04-18 21:16:27.825456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.135 [2024-04-18 21:16:27.825463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.135 [2024-04-18 21:16:27.825479] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
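The repeated connect() errno 111 (connection refused) and "Resetting controller failed" records in this stretch are the expected fallout of step 127 above, which removed the 4420 listener from cnode0: the host keeps retrying the now-dead 4420 path until the next discovery log page prunes it, leaving only the 4421 path. The script's check of which paths survive is a one-line RPC-plus-jq helper; a sketch of it with the socket, controller name and filter copied from the trace:

  # Sketch of the get_subsystem_paths helper used throughout discovery.sh:
  # list the listener ports the host-side controller "nvme0" currently has.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # Prints "4420 4421" while both listeners are up, and just "4421" once the
  # 4420 path has been removed and pruned, which is what step 131 below asserts.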
00:23:12.135 [2024-04-18 21:16:27.834946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.135 [2024-04-18 21:16:27.835309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.835685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.835696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1780 with addr=10.0.0.2, port=4420 00:23:12.135 [2024-04-18 21:16:27.835703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1780 is same with the state(5) to be set 00:23:12.135 [2024-04-18 21:16:27.835714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1780 (9): Bad file descriptor 00:23:12.135 [2024-04-18 21:16:27.835730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.135 [2024-04-18 21:16:27.835737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.135 [2024-04-18 21:16:27.835743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.135 [2024-04-18 21:16:27.835753] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.135 21:16:27 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.135 21:16:27 -- common/autotest_common.sh@904 -- # return 0 00:23:12.135 21:16:27 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.135 21:16:27 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.135 21:16:27 -- common/autotest_common.sh@901 -- # local max=10 00:23:12.135 21:16:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:12.135 21:16:27 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:12.135 21:16:27 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:12.135 21:16:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.135 21:16:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.135 21:16:27 -- host/discovery.sh@55 -- # sort 00:23:12.135 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.135 21:16:27 -- host/discovery.sh@55 -- # xargs 00:23:12.135 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.135 [2024-04-18 21:16:27.844995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.135 [2024-04-18 21:16:27.845300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.845739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.845750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1780 with addr=10.0.0.2, port=4420 00:23:12.135 [2024-04-18 21:16:27.845757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1780 is same with the state(5) to be set 00:23:12.135 [2024-04-18 21:16:27.845767] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1780 (9): Bad file descriptor 00:23:12.135 [2024-04-18 21:16:27.846368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.135 [2024-04-18 21:16:27.846379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.135 [2024-04-18 21:16:27.846385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.135 [2024-04-18 21:16:27.846396] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.135 [2024-04-18 21:16:27.855044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.135 [2024-04-18 21:16:27.855387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.855686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.135 [2024-04-18 21:16:27.855696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1780 with addr=10.0.0.2, port=4420 00:23:12.135 [2024-04-18 21:16:27.855704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1780 is same with the state(5) to be set 00:23:12.135 [2024-04-18 21:16:27.855714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1780 (9): Bad file descriptor 00:23:12.135 [2024-04-18 21:16:27.855724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.135 [2024-04-18 21:16:27.855730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.136 [2024-04-18 21:16:27.855736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.136 [2024-04-18 21:16:27.855745] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.136 [2024-04-18 21:16:27.864858] bdev_nvme.c:6717:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:12.136 [2024-04-18 21:16:27.864873] bdev_nvme.c:6708:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:12.136 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:12.136 21:16:27 -- common/autotest_common.sh@904 -- # return 0 00:23:12.136 21:16:27 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:12.136 21:16:27 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:12.136 21:16:27 -- common/autotest_common.sh@901 -- # local max=10 00:23:12.136 21:16:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:12.136 21:16:27 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:12.136 21:16:27 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:12.136 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.136 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.136 21:16:27 -- host/discovery.sh@63 -- # sort -n 00:23:12.136 21:16:27 -- host/discovery.sh@63 -- # xargs 00:23:12.136 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:12.136 21:16:27 -- common/autotest_common.sh@904 -- # return 0 00:23:12.136 21:16:27 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:12.136 21:16:27 -- host/discovery.sh@79 -- # expected_count=0 00:23:12.136 21:16:27 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:12.136 21:16:27 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:12.136 21:16:27 -- common/autotest_common.sh@901 -- # local max=10 00:23:12.136 21:16:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:12.136 21:16:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:12.136 21:16:27 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:12.136 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.136 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.136 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.136 21:16:27 -- host/discovery.sh@74 -- # notification_count=0 00:23:12.136 21:16:27 -- host/discovery.sh@75 -- # notify_id=2 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:12.136 21:16:27 -- common/autotest_common.sh@904 -- # return 0 00:23:12.136 21:16:27 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:12.136 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.136 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.136 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.136 21:16:27 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:12.136 21:16:27 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:12.136 21:16:27 -- common/autotest_common.sh@901 -- # local max=10 00:23:12.136 21:16:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:12.136 21:16:27 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:12.136 21:16:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.136 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.136 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.136 21:16:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:12.136 21:16:27 -- host/discovery.sh@59 -- # sort 00:23:12.136 21:16:27 -- host/discovery.sh@59 -- # xargs 00:23:12.136 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.136 21:16:28 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:12.136 21:16:28 -- common/autotest_common.sh@904 -- # return 0 00:23:12.136 21:16:28 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:12.136 21:16:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:12.136 21:16:28 -- common/autotest_common.sh@901 -- # local max=10 00:23:12.136 21:16:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:12.136 21:16:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:12.136 21:16:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:12.136 21:16:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.136 21:16:28 -- host/discovery.sh@55 -- # xargs 00:23:12.136 21:16:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.136 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.136 21:16:28 -- host/discovery.sh@55 -- # sort 00:23:12.136 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:12.136 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.395 21:16:28 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:12.395 21:16:28 -- common/autotest_common.sh@904 -- # return 0 00:23:12.395 21:16:28 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:12.395 21:16:28 -- host/discovery.sh@79 -- # expected_count=2 00:23:12.395 21:16:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:12.395 21:16:28 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:12.395 21:16:28 -- common/autotest_common.sh@901 -- # local max=10 00:23:12.395 21:16:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:12.395 21:16:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:12.395 21:16:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:12.395 21:16:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:12.395 21:16:28 -- host/discovery.sh@74 -- # jq '. | length' 00:23:12.395 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.395 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:12.395 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.395 21:16:28 -- host/discovery.sh@74 -- # notification_count=2 00:23:12.395 21:16:28 -- host/discovery.sh@75 -- # notify_id=4 00:23:12.395 21:16:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:12.395 21:16:28 -- common/autotest_common.sh@904 -- # return 0 00:23:12.395 21:16:28 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:12.395 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.395 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:13.332 [2024-04-18 21:16:29.192236] bdev_nvme.c:6930:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:13.332 [2024-04-18 21:16:29.192253] bdev_nvme.c:7010:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:13.332 [2024-04-18 21:16:29.192264] bdev_nvme.c:6893:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:13.591 [2024-04-18 21:16:29.280536] bdev_nvme.c:6859:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:13.591 [2024-04-18 21:16:29.508519] bdev_nvme.c:6749:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:13.591 [2024-04-18 21:16:29.508544] bdev_nvme.c:6708:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:13.591 21:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.591 21:16:29 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:13.591 21:16:29 -- common/autotest_common.sh@638 -- # local es=0 00:23:13.591 21:16:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:13.591 21:16:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:13.591 21:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.591 21:16:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:13.591 21:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.591 21:16:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:13.591 21:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.591 21:16:29 -- 
common/autotest_common.sh@10 -- # set +x 00:23:13.850 request: 00:23:13.850 { 00:23:13.850 "name": "nvme", 00:23:13.850 "trtype": "tcp", 00:23:13.850 "traddr": "10.0.0.2", 00:23:13.850 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:13.850 "adrfam": "ipv4", 00:23:13.850 "trsvcid": "8009", 00:23:13.850 "wait_for_attach": true, 00:23:13.850 "method": "bdev_nvme_start_discovery", 00:23:13.850 "req_id": 1 00:23:13.850 } 00:23:13.850 Got JSON-RPC error response 00:23:13.850 response: 00:23:13.850 { 00:23:13.850 "code": -17, 00:23:13.850 "message": "File exists" 00:23:13.850 } 00:23:13.850 21:16:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:13.850 21:16:29 -- common/autotest_common.sh@641 -- # es=1 00:23:13.850 21:16:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:13.850 21:16:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:13.850 21:16:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:13.850 21:16:29 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:13.850 21:16:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:13.850 21:16:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:13.850 21:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.850 21:16:29 -- host/discovery.sh@67 -- # sort 00:23:13.850 21:16:29 -- common/autotest_common.sh@10 -- # set +x 00:23:13.850 21:16:29 -- host/discovery.sh@67 -- # xargs 00:23:13.850 21:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.850 21:16:29 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:13.850 21:16:29 -- host/discovery.sh@146 -- # get_bdev_list 00:23:13.850 21:16:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.850 21:16:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.850 21:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.850 21:16:29 -- host/discovery.sh@55 -- # sort 00:23:13.850 21:16:29 -- common/autotest_common.sh@10 -- # set +x 00:23:13.850 21:16:29 -- host/discovery.sh@55 -- # xargs 00:23:13.850 21:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.850 21:16:29 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:13.850 21:16:29 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:13.850 21:16:29 -- common/autotest_common.sh@638 -- # local es=0 00:23:13.850 21:16:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:13.850 21:16:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:13.850 21:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.850 21:16:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:13.850 21:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.850 21:16:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:13.850 21:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.850 21:16:29 -- common/autotest_common.sh@10 -- # set +x 00:23:13.850 request: 00:23:13.850 { 00:23:13.850 "name": "nvme_second", 00:23:13.850 "trtype": "tcp", 00:23:13.850 "traddr": "10.0.0.2", 00:23:13.850 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:23:13.850 "adrfam": "ipv4", 00:23:13.850 "trsvcid": "8009", 00:23:13.850 "wait_for_attach": true, 00:23:13.850 "method": "bdev_nvme_start_discovery", 00:23:13.850 "req_id": 1 00:23:13.850 } 00:23:13.850 Got JSON-RPC error response 00:23:13.850 response: 00:23:13.850 { 00:23:13.850 "code": -17, 00:23:13.850 "message": "File exists" 00:23:13.850 } 00:23:13.850 21:16:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:13.850 21:16:29 -- common/autotest_common.sh@641 -- # es=1 00:23:13.850 21:16:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:13.850 21:16:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:13.850 21:16:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:13.850 21:16:29 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:13.850 21:16:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:13.850 21:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.850 21:16:29 -- common/autotest_common.sh@10 -- # set +x 00:23:13.850 21:16:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:13.850 21:16:29 -- host/discovery.sh@67 -- # sort 00:23:13.850 21:16:29 -- host/discovery.sh@67 -- # xargs 00:23:13.850 21:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.850 21:16:29 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:13.850 21:16:29 -- host/discovery.sh@152 -- # get_bdev_list 00:23:13.850 21:16:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.850 21:16:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.850 21:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.850 21:16:29 -- host/discovery.sh@55 -- # sort 00:23:13.850 21:16:29 -- common/autotest_common.sh@10 -- # set +x 00:23:13.850 21:16:29 -- host/discovery.sh@55 -- # xargs 00:23:13.850 21:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.850 21:16:29 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:13.850 21:16:29 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:13.850 21:16:29 -- common/autotest_common.sh@638 -- # local es=0 00:23:13.850 21:16:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:13.850 21:16:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:13.850 21:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.850 21:16:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:13.850 21:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.850 21:16:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:13.850 21:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.850 21:16:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.229 [2024-04-18 21:16:30.756159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:15.229 [2024-04-18 21:16:30.756539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:15.229 [2024-04-18 21:16:30.756552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x18d78e0 with addr=10.0.0.2, port=8010 00:23:15.229 [2024-04-18 21:16:30.756568] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:15.229 [2024-04-18 21:16:30.756574] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:15.229 [2024-04-18 21:16:30.756581] bdev_nvme.c:6992:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:16.165 [2024-04-18 21:16:31.758537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.165 [2024-04-18 21:16:31.758892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.165 [2024-04-18 21:16:31.758903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18faa50 with addr=10.0.0.2, port=8010 00:23:16.165 [2024-04-18 21:16:31.758914] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:16.166 [2024-04-18 21:16:31.758921] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:16.166 [2024-04-18 21:16:31.758926] bdev_nvme.c:6992:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:17.102 [2024-04-18 21:16:32.760630] bdev_nvme.c:6973:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:17.102 request: 00:23:17.102 { 00:23:17.102 "name": "nvme_second", 00:23:17.102 "trtype": "tcp", 00:23:17.102 "traddr": "10.0.0.2", 00:23:17.102 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:17.102 "adrfam": "ipv4", 00:23:17.102 "trsvcid": "8010", 00:23:17.102 "attach_timeout_ms": 3000, 00:23:17.102 "method": "bdev_nvme_start_discovery", 00:23:17.102 "req_id": 1 00:23:17.102 } 00:23:17.102 Got JSON-RPC error response 00:23:17.102 response: 00:23:17.102 { 00:23:17.102 "code": -110, 00:23:17.102 "message": "Connection timed out" 00:23:17.102 } 00:23:17.102 21:16:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:17.102 21:16:32 -- common/autotest_common.sh@641 -- # es=1 00:23:17.102 21:16:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:17.102 21:16:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:17.102 21:16:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:17.102 21:16:32 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:17.102 21:16:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:17.102 21:16:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:17.102 21:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.102 21:16:32 -- host/discovery.sh@67 -- # sort 00:23:17.102 21:16:32 -- common/autotest_common.sh@10 -- # set +x 00:23:17.102 21:16:32 -- host/discovery.sh@67 -- # xargs 00:23:17.102 21:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.102 21:16:32 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:17.102 21:16:32 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:17.102 21:16:32 -- host/discovery.sh@161 -- # kill 3154080 00:23:17.102 21:16:32 -- host/discovery.sh@162 -- # nvmftestfini 00:23:17.102 21:16:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:17.102 21:16:32 -- nvmf/common.sh@117 -- # sync 00:23:17.102 21:16:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.102 21:16:32 -- nvmf/common.sh@120 -- # set +e 00:23:17.102 21:16:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.102 21:16:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.102 rmmod nvme_tcp 00:23:17.102 rmmod nvme_fabrics 
00:23:17.102 rmmod nvme_keyring 00:23:17.102 21:16:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.102 21:16:32 -- nvmf/common.sh@124 -- # set -e 00:23:17.102 21:16:32 -- nvmf/common.sh@125 -- # return 0 00:23:17.102 21:16:32 -- nvmf/common.sh@478 -- # '[' -n 3153975 ']' 00:23:17.102 21:16:32 -- nvmf/common.sh@479 -- # killprocess 3153975 00:23:17.102 21:16:32 -- common/autotest_common.sh@936 -- # '[' -z 3153975 ']' 00:23:17.102 21:16:32 -- common/autotest_common.sh@940 -- # kill -0 3153975 00:23:17.102 21:16:32 -- common/autotest_common.sh@941 -- # uname 00:23:17.102 21:16:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.102 21:16:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3153975 00:23:17.102 21:16:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:17.102 21:16:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:17.102 21:16:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3153975' 00:23:17.102 killing process with pid 3153975 00:23:17.102 21:16:32 -- common/autotest_common.sh@955 -- # kill 3153975 00:23:17.102 21:16:32 -- common/autotest_common.sh@960 -- # wait 3153975 00:23:17.362 21:16:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:17.362 21:16:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:17.362 21:16:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:17.362 21:16:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.362 21:16:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.362 21:16:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.362 21:16:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.362 21:16:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.893 21:16:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.893 00:23:19.893 real 0m18.057s 00:23:19.893 user 0m22.630s 00:23:19.893 sys 0m5.448s 00:23:19.893 21:16:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:19.893 21:16:35 -- common/autotest_common.sh@10 -- # set +x 00:23:19.893 ************************************ 00:23:19.893 END TEST nvmf_discovery 00:23:19.893 ************************************ 00:23:19.893 21:16:35 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:19.893 21:16:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:19.893 21:16:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:19.893 21:16:35 -- common/autotest_common.sh@10 -- # set +x 00:23:19.893 ************************************ 00:23:19.893 START TEST nvmf_discovery_remove_ifc 00:23:19.893 ************************************ 00:23:19.893 21:16:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:19.893 * Looking for test storage... 
00:23:19.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.893 21:16:35 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.893 21:16:35 -- nvmf/common.sh@7 -- # uname -s 00:23:19.893 21:16:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.893 21:16:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.893 21:16:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.893 21:16:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.893 21:16:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.893 21:16:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.893 21:16:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.893 21:16:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.893 21:16:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.893 21:16:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.893 21:16:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:19.893 21:16:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:19.893 21:16:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.893 21:16:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.893 21:16:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.893 21:16:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.893 21:16:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.893 21:16:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.893 21:16:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.893 21:16:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.894 21:16:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.894 21:16:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.894 21:16:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.894 21:16:35 -- paths/export.sh@5 -- # export PATH 00:23:19.894 21:16:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.894 21:16:35 -- nvmf/common.sh@47 -- # : 0 00:23:19.894 21:16:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.894 21:16:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.894 21:16:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.894 21:16:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.894 21:16:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.894 21:16:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.894 21:16:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.894 21:16:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.894 21:16:35 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:19.894 21:16:35 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:19.894 21:16:35 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:19.894 21:16:35 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:19.894 21:16:35 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:19.894 21:16:35 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:19.894 21:16:35 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:19.894 21:16:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:19.894 21:16:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.894 21:16:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:19.894 21:16:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:19.894 21:16:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:19.894 21:16:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.894 21:16:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.894 21:16:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.894 21:16:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:19.894 21:16:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:19.894 21:16:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.894 21:16:35 -- common/autotest_common.sh@10 -- # set +x 00:23:26.461 21:16:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:26.461 21:16:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:26.461 21:16:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:26.461 21:16:41 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:26.461 21:16:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:26.461 21:16:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:26.461 21:16:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:26.461 21:16:41 -- nvmf/common.sh@295 -- # net_devs=() 00:23:26.462 21:16:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:26.462 21:16:41 -- nvmf/common.sh@296 -- # e810=() 00:23:26.462 21:16:41 -- nvmf/common.sh@296 -- # local -ga e810 00:23:26.462 21:16:41 -- nvmf/common.sh@297 -- # x722=() 00:23:26.462 21:16:41 -- nvmf/common.sh@297 -- # local -ga x722 00:23:26.462 21:16:41 -- nvmf/common.sh@298 -- # mlx=() 00:23:26.462 21:16:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:26.462 21:16:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.462 21:16:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:26.462 21:16:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:26.462 21:16:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:26.462 21:16:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.462 21:16:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:26.462 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:26.462 21:16:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.462 21:16:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:26.462 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:26.462 21:16:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:26.462 21:16:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:26.462 21:16:41 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.462 21:16:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.462 21:16:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:26.462 21:16:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.462 21:16:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:26.462 Found net devices under 0000:86:00.0: cvl_0_0 00:23:26.462 21:16:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.462 21:16:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.462 21:16:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.462 21:16:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:26.462 21:16:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.462 21:16:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:26.462 Found net devices under 0000:86:00.1: cvl_0_1 00:23:26.462 21:16:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.462 21:16:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:26.462 21:16:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:26.462 21:16:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:26.462 21:16:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.462 21:16:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.462 21:16:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.462 21:16:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:26.462 21:16:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.462 21:16:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.462 21:16:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:26.462 21:16:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.462 21:16:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.462 21:16:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:26.462 21:16:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:26.462 21:16:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.462 21:16:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.462 21:16:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.462 21:16:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.462 21:16:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:26.462 21:16:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.462 21:16:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.462 21:16:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.462 21:16:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:26.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:26.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:26.462 00:23:26.462 --- 10.0.0.2 ping statistics --- 00:23:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.462 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:26.462 21:16:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:23:26.462 00:23:26.462 --- 10.0.0.1 ping statistics --- 00:23:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.462 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:23:26.462 21:16:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.462 21:16:41 -- nvmf/common.sh@411 -- # return 0 00:23:26.462 21:16:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:26.462 21:16:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.462 21:16:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:26.462 21:16:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.462 21:16:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:26.462 21:16:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:26.462 21:16:41 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:26.462 21:16:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:26.462 21:16:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:26.462 21:16:41 -- common/autotest_common.sh@10 -- # set +x 00:23:26.462 21:16:41 -- nvmf/common.sh@470 -- # nvmfpid=3159603 00:23:26.462 21:16:41 -- nvmf/common.sh@471 -- # waitforlisten 3159603 00:23:26.462 21:16:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.462 21:16:41 -- common/autotest_common.sh@817 -- # '[' -z 3159603 ']' 00:23:26.462 21:16:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.462 21:16:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:26.462 21:16:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.462 21:16:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:26.462 21:16:41 -- common/autotest_common.sh@10 -- # set +x 00:23:26.462 [2024-04-18 21:16:41.831858] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:23:26.462 [2024-04-18 21:16:41.831901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.462 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.462 [2024-04-18 21:16:41.893397] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.462 [2024-04-18 21:16:41.970676] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.462 [2024-04-18 21:16:41.970707] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:26.462 [2024-04-18 21:16:41.970713] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.462 [2024-04-18 21:16:41.970719] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.462 [2024-04-18 21:16:41.970724] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.462 [2024-04-18 21:16:41.970760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.722 21:16:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:26.722 21:16:42 -- common/autotest_common.sh@850 -- # return 0 00:23:26.722 21:16:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:26.722 21:16:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:26.722 21:16:42 -- common/autotest_common.sh@10 -- # set +x 00:23:26.981 21:16:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.981 21:16:42 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:26.981 21:16:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.981 21:16:42 -- common/autotest_common.sh@10 -- # set +x 00:23:26.981 [2024-04-18 21:16:42.676264] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.981 [2024-04-18 21:16:42.684384] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:26.981 null0 00:23:26.981 [2024-04-18 21:16:42.716407] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.981 21:16:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.981 21:16:42 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3159846 00:23:26.981 21:16:42 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:26.981 21:16:42 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3159846 /tmp/host.sock 00:23:26.981 21:16:42 -- common/autotest_common.sh@817 -- # '[' -z 3159846 ']' 00:23:26.981 21:16:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:26.981 21:16:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:26.981 21:16:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:26.981 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:26.981 21:16:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:26.981 21:16:42 -- common/autotest_common.sh@10 -- # set +x 00:23:26.981 [2024-04-18 21:16:42.782129] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:23:26.981 [2024-04-18 21:16:42.782168] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159846 ] 00:23:26.981 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.981 [2024-04-18 21:16:42.839718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.981 [2024-04-18 21:16:42.910729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.919 21:16:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:27.919 21:16:43 -- common/autotest_common.sh@850 -- # return 0 00:23:27.919 21:16:43 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.919 21:16:43 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:27.919 21:16:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.919 21:16:43 -- common/autotest_common.sh@10 -- # set +x 00:23:27.920 21:16:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.920 21:16:43 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:27.920 21:16:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.920 21:16:43 -- common/autotest_common.sh@10 -- # set +x 00:23:27.920 21:16:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.920 21:16:43 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:27.920 21:16:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.920 21:16:43 -- common/autotest_common.sh@10 -- # set +x 00:23:28.855 [2024-04-18 21:16:44.705260] bdev_nvme.c:6930:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:28.855 [2024-04-18 21:16:44.705283] bdev_nvme.c:7010:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:28.855 [2024-04-18 21:16:44.705296] bdev_nvme.c:6893:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.114 [2024-04-18 21:16:44.835715] bdev_nvme.c:6859:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:29.114 [2024-04-18 21:16:45.019382] bdev_nvme.c:7720:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:29.114 [2024-04-18 21:16:45.019428] bdev_nvme.c:7720:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:29.114 [2024-04-18 21:16:45.019450] bdev_nvme.c:7720:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:29.114 [2024-04-18 21:16:45.019462] bdev_nvme.c:6749:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.114 [2024-04-18 21:16:45.019479] bdev_nvme.c:6708:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.114 21:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.114 21:16:45 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:29.114 21:16:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.114 [2024-04-18 21:16:45.024489] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12eb900 was 
disconnected and freed. delete nvme_qpair. 00:23:29.114 21:16:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.114 21:16:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.114 21:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.114 21:16:45 -- common/autotest_common.sh@10 -- # set +x 00:23:29.114 21:16:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.114 21:16:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.114 21:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.373 21:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.373 21:16:45 -- common/autotest_common.sh@10 -- # set +x 00:23:29.373 21:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:29.373 21:16:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.312 21:16:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.312 21:16:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.312 21:16:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.312 21:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.312 21:16:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.312 21:16:46 -- common/autotest_common.sh@10 -- # set +x 00:23:30.312 21:16:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.312 21:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.571 21:16:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.571 21:16:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.506 21:16:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.506 21:16:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.506 21:16:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.506 21:16:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.506 21:16:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.506 21:16:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.506 21:16:47 -- common/autotest_common.sh@10 -- # set +x 00:23:31.506 21:16:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.506 21:16:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:31.506 21:16:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.442 21:16:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.442 21:16:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.442 21:16:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.442 21:16:48 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.442 21:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.442 21:16:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.442 21:16:48 -- common/autotest_common.sh@10 -- # set +x 00:23:32.442 21:16:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.702 21:16:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.702 21:16:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.640 21:16:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.640 21:16:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.640 21:16:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.640 21:16:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.640 21:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.640 21:16:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.640 21:16:49 -- common/autotest_common.sh@10 -- # set +x 00:23:33.640 21:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.640 21:16:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:33.640 21:16:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.576 21:16:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.576 21:16:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.576 21:16:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:34.576 21:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.576 21:16:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.576 21:16:50 -- common/autotest_common.sh@10 -- # set +x 00:23:34.576 21:16:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.576 21:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.576 [2024-04-18 21:16:50.460433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:34.576 [2024-04-18 21:16:50.460472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.576 [2024-04-18 21:16:50.460483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.576 [2024-04-18 21:16:50.460491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.576 [2024-04-18 21:16:50.460498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.576 [2024-04-18 21:16:50.460505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.576 [2024-04-18 21:16:50.460515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.576 [2024-04-18 21:16:50.460530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.576 [2024-04-18 21:16:50.460537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.576 [2024-04-18 21:16:50.460544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.576 [2024-04-18 21:16:50.460551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.576 [2024-04-18 21:16:50.460557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2ae0 is same with the state(5) to be set 00:23:34.576 [2024-04-18 21:16:50.470455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b2ae0 (9): Bad file descriptor 00:23:34.576 [2024-04-18 21:16:50.480492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.576 21:16:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.576 21:16:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:35.952 21:16:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.952 21:16:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.952 21:16:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.952 21:16:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.952 21:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.952 21:16:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.952 21:16:51 -- common/autotest_common.sh@10 -- # set +x 00:23:35.952 [2024-04-18 21:16:51.509533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:36.889 [2024-04-18 21:16:52.533535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:36.889 [2024-04-18 21:16:52.533586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b2ae0 with addr=10.0.0.2, port=4420 00:23:36.889 [2024-04-18 21:16:52.533612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2ae0 is same with the state(5) to be set 00:23:36.889 [2024-04-18 21:16:52.534047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b2ae0 (9): Bad file descriptor 00:23:36.889 [2024-04-18 21:16:52.534075] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.889 [2024-04-18 21:16:52.534099] bdev_nvme.c:6681:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:36.889 [2024-04-18 21:16:52.534125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.889 [2024-04-18 21:16:52.534137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.889 [2024-04-18 21:16:52.534150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.889 [2024-04-18 21:16:52.534159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.889 [2024-04-18 21:16:52.534169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.889 [2024-04-18 21:16:52.534178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.889 [2024-04-18 21:16:52.534188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.889 [2024-04-18 21:16:52.534197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.889 [2024-04-18 21:16:52.534207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.889 [2024-04-18 21:16:52.534222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.889 [2024-04-18 21:16:52.534231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
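The polling that the trace keeps repeating around these resets boils down to: list the bdevs over the per-test RPC socket, flatten the names, and sleep until the list matches what the step expects. A minimal sketch of that loop, assuming rpc_cmd is SPDK's scripts/rpc.py wrapper and /tmp/host.sock is the host application's RPC socket (both taken from the trace; the 30-iteration cap is an assumption, the real test leans on its outer timeout):

# sketch of the get_bdev_list / wait_for_bdev polling visible above
get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    local expected=$1 i
    for ((i = 0; i < 30; i++)); do      # assumed cap; the test itself loops until an outer timeout fires
        [[ $(get_bdev_list) == "$expected" ]] && return 0
        sleep 1
    done
    return 1
}
# e.g.: wait_for_bdev nvme1n1   # block until nvme1n1 is the only attached bdev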
00:23:36.889 [2024-04-18 21:16:52.534639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b1f70 (9): Bad file descriptor 00:23:36.889 [2024-04-18 21:16:52.535652] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:36.889 [2024-04-18 21:16:52.535666] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:36.889 21:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.889 21:16:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:36.889 21:16:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.826 21:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.826 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.826 21:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.826 21:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.826 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.826 21:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:37.826 21:16:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.762 [2024-04-18 21:16:54.590191] bdev_nvme.c:6930:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.762 [2024-04-18 21:16:54.590209] bdev_nvme.c:7010:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.762 [2024-04-18 21:16:54.590223] bdev_nvme.c:6893:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.020 [2024-04-18 21:16:54.720632] bdev_nvme.c:6859:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:39.020 21:16:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.020 21:16:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.020 21:16:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.020 21:16:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.020 21:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.020 21:16:54 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:23:39.020 21:16:54 -- common/autotest_common.sh@10 -- # set +x 00:23:39.020 21:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.020 21:16:54 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:39.020 21:16:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.020 [2024-04-18 21:16:54.821341] bdev_nvme.c:7720:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:39.020 [2024-04-18 21:16:54.821376] bdev_nvme.c:7720:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:39.020 [2024-04-18 21:16:54.821393] bdev_nvme.c:7720:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:39.020 [2024-04-18 21:16:54.821406] bdev_nvme.c:6749:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:39.020 [2024-04-18 21:16:54.821416] bdev_nvme.c:6708:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:39.020 [2024-04-18 21:16:54.829501] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12f6490 was disconnected and freed. delete nvme_qpair. 00:23:39.958 21:16:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.958 21:16:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.958 21:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.958 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:23:39.958 21:16:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.958 21:16:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.958 21:16:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.958 21:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.958 21:16:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:39.958 21:16:55 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:39.958 21:16:55 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3159846 00:23:39.958 21:16:55 -- common/autotest_common.sh@936 -- # '[' -z 3159846 ']' 00:23:39.958 21:16:55 -- common/autotest_common.sh@940 -- # kill -0 3159846 00:23:39.958 21:16:55 -- common/autotest_common.sh@941 -- # uname 00:23:39.958 21:16:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:39.958 21:16:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3159846 00:23:40.217 21:16:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:40.217 21:16:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:40.217 21:16:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3159846' 00:23:40.217 killing process with pid 3159846 00:23:40.217 21:16:55 -- common/autotest_common.sh@955 -- # kill 3159846 00:23:40.217 21:16:55 -- common/autotest_common.sh@960 -- # wait 3159846 00:23:40.217 21:16:56 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:40.217 21:16:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:40.217 21:16:56 -- nvmf/common.sh@117 -- # sync 00:23:40.217 21:16:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.217 21:16:56 -- nvmf/common.sh@120 -- # set +e 00:23:40.217 21:16:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.217 21:16:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.217 rmmod nvme_tcp 00:23:40.217 rmmod nvme_fabrics 00:23:40.217 rmmod nvme_keyring 00:23:40.509 21:16:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.509 21:16:56 -- nvmf/common.sh@124 -- # set -e 00:23:40.509 21:16:56 
-- nvmf/common.sh@125 -- # return 0 00:23:40.509 21:16:56 -- nvmf/common.sh@478 -- # '[' -n 3159603 ']' 00:23:40.509 21:16:56 -- nvmf/common.sh@479 -- # killprocess 3159603 00:23:40.509 21:16:56 -- common/autotest_common.sh@936 -- # '[' -z 3159603 ']' 00:23:40.509 21:16:56 -- common/autotest_common.sh@940 -- # kill -0 3159603 00:23:40.509 21:16:56 -- common/autotest_common.sh@941 -- # uname 00:23:40.509 21:16:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:40.509 21:16:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3159603 00:23:40.509 21:16:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:40.509 21:16:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:40.509 21:16:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3159603' 00:23:40.509 killing process with pid 3159603 00:23:40.509 21:16:56 -- common/autotest_common.sh@955 -- # kill 3159603 00:23:40.509 21:16:56 -- common/autotest_common.sh@960 -- # wait 3159603 00:23:40.509 21:16:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:40.509 21:16:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:40.509 21:16:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:40.509 21:16:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.509 21:16:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.509 21:16:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.509 21:16:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.509 21:16:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.060 21:16:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.060 00:23:43.060 real 0m23.094s 00:23:43.060 user 0m28.005s 00:23:43.060 sys 0m6.075s 00:23:43.060 21:16:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:43.060 21:16:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.060 ************************************ 00:23:43.060 END TEST nvmf_discovery_remove_ifc 00:23:43.060 ************************************ 00:23:43.060 21:16:58 -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:43.060 21:16:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:43.060 21:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.060 21:16:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.060 ************************************ 00:23:43.060 START TEST nvmf_identify_kernel_target 00:23:43.060 ************************************ 00:23:43.060 21:16:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:43.060 * Looking for test storage... 
00:23:43.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.060 21:16:58 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.060 21:16:58 -- nvmf/common.sh@7 -- # uname -s 00:23:43.060 21:16:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.060 21:16:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.060 21:16:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.060 21:16:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.060 21:16:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.060 21:16:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.060 21:16:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.060 21:16:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.060 21:16:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.060 21:16:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.061 21:16:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.061 21:16:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.061 21:16:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.061 21:16:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.061 21:16:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.061 21:16:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.061 21:16:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.061 21:16:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.061 21:16:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.061 21:16:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.061 21:16:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.061 21:16:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.061 21:16:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.061 21:16:58 -- paths/export.sh@5 -- # export PATH 00:23:43.061 21:16:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.061 21:16:58 -- nvmf/common.sh@47 -- # : 0 00:23:43.061 21:16:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.061 21:16:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.061 21:16:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.061 21:16:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.061 21:16:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.061 21:16:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.061 21:16:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.061 21:16:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.061 21:16:58 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:43.061 21:16:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:43.061 21:16:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.061 21:16:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:43.061 21:16:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:43.061 21:16:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:43.061 21:16:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.061 21:16:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.061 21:16:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.061 21:16:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:43.061 21:16:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:43.061 21:16:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.061 21:16:58 -- common/autotest_common.sh@10 -- # set +x 00:23:49.633 21:17:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:49.633 21:17:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.633 21:17:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.633 21:17:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.633 21:17:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.633 21:17:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.633 21:17:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.633 21:17:04 -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.633 21:17:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:49.633 21:17:04 -- nvmf/common.sh@296 -- # e810=() 00:23:49.633 21:17:04 -- nvmf/common.sh@296 -- # local -ga e810 00:23:49.633 21:17:04 -- nvmf/common.sh@297 -- # 
x722=() 00:23:49.633 21:17:04 -- nvmf/common.sh@297 -- # local -ga x722 00:23:49.633 21:17:04 -- nvmf/common.sh@298 -- # mlx=() 00:23:49.633 21:17:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:49.633 21:17:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.633 21:17:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:49.633 21:17:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:49.633 21:17:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:49.633 21:17:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.633 21:17:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:49.633 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:49.633 21:17:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.633 21:17:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:49.633 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:49.633 21:17:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:49.633 21:17:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.633 21:17:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.633 21:17:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:49.633 21:17:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.633 21:17:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:49.633 Found net devices under 0000:86:00.0: cvl_0_0 00:23:49.633 21:17:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
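The NIC discovery above is essentially a sysfs walk: each whitelisted PCI function exposes its bound net device under /sys/bus/pci/devices/<bdf>/net/. A stand-alone sketch that mirrors the trace, using the two e810 functions reported in the log (everything else is the standard sysfs layout):

# sketch: resolve PCI functions to net device names, as the pci_net_devs lines above do
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one path per bound netdev
    [[ -e ${pci_net_devs[0]} ]] || { echo "no net device under $pci"; continue; }
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done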
00:23:49.633 21:17:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.633 21:17:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.633 21:17:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:49.633 21:17:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.633 21:17:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:49.633 Found net devices under 0000:86:00.1: cvl_0_1 00:23:49.633 21:17:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.633 21:17:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:49.633 21:17:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:49.633 21:17:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:49.633 21:17:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:49.633 21:17:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.633 21:17:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.633 21:17:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.633 21:17:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:49.633 21:17:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.633 21:17:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.633 21:17:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:49.633 21:17:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.633 21:17:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.633 21:17:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:49.633 21:17:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:49.633 21:17:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.633 21:17:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.633 21:17:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.633 21:17:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.634 21:17:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:49.634 21:17:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.634 21:17:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.634 21:17:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.634 21:17:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:49.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:23:49.634 00:23:49.634 --- 10.0.0.2 ping statistics --- 00:23:49.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.634 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:49.634 21:17:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:49.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:23:49.634 00:23:49.634 --- 10.0.0.1 ping statistics --- 00:23:49.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.634 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:49.634 21:17:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.634 21:17:05 -- nvmf/common.sh@411 -- # return 0 00:23:49.634 21:17:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:49.634 21:17:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.634 21:17:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:49.634 21:17:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:49.634 21:17:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.634 21:17:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:49.634 21:17:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:49.634 21:17:05 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:49.634 21:17:05 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:49.634 21:17:05 -- nvmf/common.sh@730 -- # local ip 00:23:49.634 21:17:05 -- nvmf/common.sh@731 -- # ip_candidates=() 00:23:49.634 21:17:05 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:23:49.634 21:17:05 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.634 21:17:05 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.634 21:17:05 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:23:49.634 21:17:05 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.634 21:17:05 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:23:49.634 21:17:05 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:23:49.634 21:17:05 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:23:49.634 21:17:05 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:49.634 21:17:05 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:49.634 21:17:05 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:49.634 21:17:05 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:49.634 21:17:05 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:49.634 21:17:05 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:49.634 21:17:05 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:49.634 21:17:05 -- nvmf/common.sh@628 -- # local block nvme 00:23:49.634 21:17:05 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:49.634 21:17:05 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:49.634 21:17:05 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:49.634 21:17:05 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:52.169 Waiting for block devices as requested 00:23:52.428 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:23:52.428 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:52.428 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:52.687 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:52.687 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:52.687 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:52.687 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:52.946 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:52.946 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:52.946 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:53.205 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:53.205 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:53.205 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:53.205 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:53.464 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:53.464 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:53.464 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:53.724 21:17:09 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:53.724 21:17:09 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:53.724 21:17:09 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:53.724 21:17:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:53.724 21:17:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:53.724 21:17:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:53.724 21:17:09 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:53.724 21:17:09 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:53.724 21:17:09 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:53.724 No valid GPT data, bailing 00:23:53.724 21:17:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:53.724 21:17:09 -- scripts/common.sh@391 -- # pt= 00:23:53.724 21:17:09 -- scripts/common.sh@392 -- # return 1 00:23:53.724 21:17:09 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:53.724 21:17:09 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:53.724 21:17:09 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.724 21:17:09 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.724 21:17:09 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:53.724 21:17:09 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:53.724 21:17:09 -- nvmf/common.sh@656 -- # echo 1 00:23:53.724 21:17:09 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:53.724 21:17:09 -- nvmf/common.sh@658 -- # echo 1 00:23:53.724 21:17:09 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:53.724 21:17:09 -- nvmf/common.sh@661 -- # echo tcp 00:23:53.724 21:17:09 -- nvmf/common.sh@662 -- # echo 4420 00:23:53.724 21:17:09 -- nvmf/common.sh@663 -- # echo ipv4 00:23:53.724 21:17:09 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:53.724 21:17:09 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:23:53.724 00:23:53.724 Discovery Log Number of Records 2, Generation counter 2 00:23:53.724 =====Discovery Log Entry 0====== 00:23:53.724 trtype: tcp 00:23:53.724 adrfam: ipv4 00:23:53.724 subtype: current discovery subsystem 00:23:53.724 treq: not specified, sq flow control disable supported 00:23:53.724 portid: 1 00:23:53.724 trsvcid: 4420 00:23:53.724 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:53.724 traddr: 10.0.0.1 00:23:53.724 eflags: none 00:23:53.724 sectype: none 00:23:53.724 =====Discovery Log Entry 1====== 00:23:53.724 trtype: tcp 00:23:53.724 adrfam: ipv4 00:23:53.724 subtype: nvme subsystem 00:23:53.724 treq: not specified, sq flow control disable supported 00:23:53.724 portid: 1 00:23:53.724 trsvcid: 4420 00:23:53.724 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:53.724 traddr: 10.0.0.1 00:23:53.724 eflags: none 00:23:53.724 sectype: none 00:23:53.724 21:17:09 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:53.724 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:53.724 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.724 ===================================================== 00:23:53.724 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:53.724 ===================================================== 00:23:53.724 Controller Capabilities/Features 00:23:53.724 ================================ 00:23:53.724 Vendor ID: 0000 00:23:53.724 Subsystem Vendor ID: 0000 00:23:53.724 Serial Number: 8016ba86948f3a03e6c2 00:23:53.724 Model Number: Linux 00:23:53.724 Firmware Version: 6.7.0-68 00:23:53.724 Recommended Arb Burst: 0 00:23:53.724 IEEE OUI Identifier: 00 00 00 00:23:53.724 Multi-path I/O 00:23:53.724 May have multiple subsystem ports: No 00:23:53.724 May have multiple controllers: No 00:23:53.724 Associated with SR-IOV VF: No 00:23:53.724 Max Data Transfer Size: Unlimited 00:23:53.724 Max Number of Namespaces: 0 00:23:53.724 Max Number of I/O Queues: 1024 00:23:53.724 NVMe Specification Version (VS): 1.3 00:23:53.724 NVMe Specification Version (Identify): 1.3 00:23:53.724 Maximum Queue Entries: 1024 00:23:53.725 Contiguous Queues Required: No 00:23:53.725 Arbitration Mechanisms Supported 00:23:53.725 Weighted Round Robin: Not Supported 00:23:53.725 Vendor Specific: Not Supported 00:23:53.725 Reset Timeout: 7500 ms 00:23:53.725 Doorbell Stride: 4 bytes 00:23:53.725 NVM Subsystem Reset: Not Supported 00:23:53.725 Command Sets Supported 00:23:53.725 NVM Command Set: Supported 00:23:53.725 Boot Partition: Not Supported 00:23:53.725 Memory Page Size Minimum: 4096 bytes 00:23:53.725 Memory Page Size Maximum: 4096 bytes 00:23:53.725 Persistent Memory Region: Not Supported 00:23:53.725 Optional Asynchronous Events Supported 00:23:53.725 Namespace Attribute Notices: Not Supported 00:23:53.725 Firmware Activation Notices: Not Supported 00:23:53.725 ANA Change Notices: Not Supported 00:23:53.725 PLE Aggregate Log Change Notices: Not Supported 00:23:53.725 LBA Status Info Alert Notices: Not Supported 00:23:53.725 EGE Aggregate Log Change Notices: Not Supported 00:23:53.725 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.725 Zone Descriptor Change Notices: Not Supported 00:23:53.725 Discovery Log Change Notices: Supported 
00:23:53.725 Controller Attributes 00:23:53.725 128-bit Host Identifier: Not Supported 00:23:53.725 Non-Operational Permissive Mode: Not Supported 00:23:53.725 NVM Sets: Not Supported 00:23:53.725 Read Recovery Levels: Not Supported 00:23:53.725 Endurance Groups: Not Supported 00:23:53.725 Predictable Latency Mode: Not Supported 00:23:53.725 Traffic Based Keep ALive: Not Supported 00:23:53.725 Namespace Granularity: Not Supported 00:23:53.725 SQ Associations: Not Supported 00:23:53.725 UUID List: Not Supported 00:23:53.725 Multi-Domain Subsystem: Not Supported 00:23:53.725 Fixed Capacity Management: Not Supported 00:23:53.725 Variable Capacity Management: Not Supported 00:23:53.725 Delete Endurance Group: Not Supported 00:23:53.725 Delete NVM Set: Not Supported 00:23:53.725 Extended LBA Formats Supported: Not Supported 00:23:53.725 Flexible Data Placement Supported: Not Supported 00:23:53.725 00:23:53.725 Controller Memory Buffer Support 00:23:53.725 ================================ 00:23:53.725 Supported: No 00:23:53.725 00:23:53.725 Persistent Memory Region Support 00:23:53.725 ================================ 00:23:53.725 Supported: No 00:23:53.725 00:23:53.725 Admin Command Set Attributes 00:23:53.725 ============================ 00:23:53.725 Security Send/Receive: Not Supported 00:23:53.725 Format NVM: Not Supported 00:23:53.725 Firmware Activate/Download: Not Supported 00:23:53.725 Namespace Management: Not Supported 00:23:53.725 Device Self-Test: Not Supported 00:23:53.725 Directives: Not Supported 00:23:53.725 NVMe-MI: Not Supported 00:23:53.725 Virtualization Management: Not Supported 00:23:53.725 Doorbell Buffer Config: Not Supported 00:23:53.725 Get LBA Status Capability: Not Supported 00:23:53.725 Command & Feature Lockdown Capability: Not Supported 00:23:53.725 Abort Command Limit: 1 00:23:53.725 Async Event Request Limit: 1 00:23:53.725 Number of Firmware Slots: N/A 00:23:53.725 Firmware Slot 1 Read-Only: N/A 00:23:53.725 Firmware Activation Without Reset: N/A 00:23:53.725 Multiple Update Detection Support: N/A 00:23:53.725 Firmware Update Granularity: No Information Provided 00:23:53.725 Per-Namespace SMART Log: No 00:23:53.725 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.725 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:53.725 Command Effects Log Page: Not Supported 00:23:53.725 Get Log Page Extended Data: Supported 00:23:53.725 Telemetry Log Pages: Not Supported 00:23:53.725 Persistent Event Log Pages: Not Supported 00:23:53.725 Supported Log Pages Log Page: May Support 00:23:53.725 Commands Supported & Effects Log Page: Not Supported 00:23:53.725 Feature Identifiers & Effects Log Page:May Support 00:23:53.725 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.725 Data Area 4 for Telemetry Log: Not Supported 00:23:53.725 Error Log Page Entries Supported: 1 00:23:53.725 Keep Alive: Not Supported 00:23:53.725 00:23:53.725 NVM Command Set Attributes 00:23:53.725 ========================== 00:23:53.725 Submission Queue Entry Size 00:23:53.725 Max: 1 00:23:53.725 Min: 1 00:23:53.725 Completion Queue Entry Size 00:23:53.725 Max: 1 00:23:53.725 Min: 1 00:23:53.725 Number of Namespaces: 0 00:23:53.725 Compare Command: Not Supported 00:23:53.725 Write Uncorrectable Command: Not Supported 00:23:53.725 Dataset Management Command: Not Supported 00:23:53.725 Write Zeroes Command: Not Supported 00:23:53.725 Set Features Save Field: Not Supported 00:23:53.725 Reservations: Not Supported 00:23:53.725 Timestamp: Not Supported 00:23:53.725 Copy: Not 
Supported 00:23:53.725 Volatile Write Cache: Not Present 00:23:53.725 Atomic Write Unit (Normal): 1 00:23:53.725 Atomic Write Unit (PFail): 1 00:23:53.725 Atomic Compare & Write Unit: 1 00:23:53.725 Fused Compare & Write: Not Supported 00:23:53.725 Scatter-Gather List 00:23:53.725 SGL Command Set: Supported 00:23:53.725 SGL Keyed: Not Supported 00:23:53.725 SGL Bit Bucket Descriptor: Not Supported 00:23:53.725 SGL Metadata Pointer: Not Supported 00:23:53.725 Oversized SGL: Not Supported 00:23:53.725 SGL Metadata Address: Not Supported 00:23:53.725 SGL Offset: Supported 00:23:53.725 Transport SGL Data Block: Not Supported 00:23:53.725 Replay Protected Memory Block: Not Supported 00:23:53.725 00:23:53.725 Firmware Slot Information 00:23:53.725 ========================= 00:23:53.725 Active slot: 0 00:23:53.725 00:23:53.725 00:23:53.725 Error Log 00:23:53.725 ========= 00:23:53.725 00:23:53.725 Active Namespaces 00:23:53.725 ================= 00:23:53.725 Discovery Log Page 00:23:53.725 ================== 00:23:53.725 Generation Counter: 2 00:23:53.725 Number of Records: 2 00:23:53.725 Record Format: 0 00:23:53.725 00:23:53.725 Discovery Log Entry 0 00:23:53.725 ---------------------- 00:23:53.725 Transport Type: 3 (TCP) 00:23:53.725 Address Family: 1 (IPv4) 00:23:53.725 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:53.725 Entry Flags: 00:23:53.726 Duplicate Returned Information: 0 00:23:53.726 Explicit Persistent Connection Support for Discovery: 0 00:23:53.726 Transport Requirements: 00:23:53.726 Secure Channel: Not Specified 00:23:53.726 Port ID: 1 (0x0001) 00:23:53.726 Controller ID: 65535 (0xffff) 00:23:53.726 Admin Max SQ Size: 32 00:23:53.726 Transport Service Identifier: 4420 00:23:53.726 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:53.726 Transport Address: 10.0.0.1 00:23:53.726 Discovery Log Entry 1 00:23:53.726 ---------------------- 00:23:53.726 Transport Type: 3 (TCP) 00:23:53.726 Address Family: 1 (IPv4) 00:23:53.726 Subsystem Type: 2 (NVM Subsystem) 00:23:53.726 Entry Flags: 00:23:53.726 Duplicate Returned Information: 0 00:23:53.726 Explicit Persistent Connection Support for Discovery: 0 00:23:53.726 Transport Requirements: 00:23:53.726 Secure Channel: Not Specified 00:23:53.726 Port ID: 1 (0x0001) 00:23:53.726 Controller ID: 65535 (0xffff) 00:23:53.726 Admin Max SQ Size: 32 00:23:53.726 Transport Service Identifier: 4420 00:23:53.726 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:53.726 Transport Address: 10.0.0.1 00:23:53.726 21:17:09 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:53.986 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.986 get_feature(0x01) failed 00:23:53.986 get_feature(0x02) failed 00:23:53.986 get_feature(0x04) failed 00:23:53.986 ===================================================== 00:23:53.986 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:53.986 ===================================================== 00:23:53.986 Controller Capabilities/Features 00:23:53.986 ================================ 00:23:53.986 Vendor ID: 0000 00:23:53.986 Subsystem Vendor ID: 0000 00:23:53.986 Serial Number: bb2fa6356b49ed56f368 00:23:53.986 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:53.986 Firmware Version: 6.7.0-68 00:23:53.986 Recommended Arb Burst: 6 00:23:53.986 IEEE OUI Identifier: 00 00 00 
00:23:53.986 Multi-path I/O 00:23:53.986 May have multiple subsystem ports: Yes 00:23:53.986 May have multiple controllers: Yes 00:23:53.986 Associated with SR-IOV VF: No 00:23:53.986 Max Data Transfer Size: Unlimited 00:23:53.986 Max Number of Namespaces: 1024 00:23:53.986 Max Number of I/O Queues: 128 00:23:53.986 NVMe Specification Version (VS): 1.3 00:23:53.986 NVMe Specification Version (Identify): 1.3 00:23:53.986 Maximum Queue Entries: 1024 00:23:53.986 Contiguous Queues Required: No 00:23:53.986 Arbitration Mechanisms Supported 00:23:53.986 Weighted Round Robin: Not Supported 00:23:53.986 Vendor Specific: Not Supported 00:23:53.986 Reset Timeout: 7500 ms 00:23:53.986 Doorbell Stride: 4 bytes 00:23:53.986 NVM Subsystem Reset: Not Supported 00:23:53.986 Command Sets Supported 00:23:53.986 NVM Command Set: Supported 00:23:53.986 Boot Partition: Not Supported 00:23:53.986 Memory Page Size Minimum: 4096 bytes 00:23:53.986 Memory Page Size Maximum: 4096 bytes 00:23:53.986 Persistent Memory Region: Not Supported 00:23:53.986 Optional Asynchronous Events Supported 00:23:53.986 Namespace Attribute Notices: Supported 00:23:53.986 Firmware Activation Notices: Not Supported 00:23:53.986 ANA Change Notices: Supported 00:23:53.986 PLE Aggregate Log Change Notices: Not Supported 00:23:53.986 LBA Status Info Alert Notices: Not Supported 00:23:53.986 EGE Aggregate Log Change Notices: Not Supported 00:23:53.986 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.986 Zone Descriptor Change Notices: Not Supported 00:23:53.987 Discovery Log Change Notices: Not Supported 00:23:53.987 Controller Attributes 00:23:53.987 128-bit Host Identifier: Supported 00:23:53.987 Non-Operational Permissive Mode: Not Supported 00:23:53.987 NVM Sets: Not Supported 00:23:53.987 Read Recovery Levels: Not Supported 00:23:53.987 Endurance Groups: Not Supported 00:23:53.987 Predictable Latency Mode: Not Supported 00:23:53.987 Traffic Based Keep ALive: Supported 00:23:53.987 Namespace Granularity: Not Supported 00:23:53.987 SQ Associations: Not Supported 00:23:53.987 UUID List: Not Supported 00:23:53.987 Multi-Domain Subsystem: Not Supported 00:23:53.987 Fixed Capacity Management: Not Supported 00:23:53.987 Variable Capacity Management: Not Supported 00:23:53.987 Delete Endurance Group: Not Supported 00:23:53.987 Delete NVM Set: Not Supported 00:23:53.987 Extended LBA Formats Supported: Not Supported 00:23:53.987 Flexible Data Placement Supported: Not Supported 00:23:53.987 00:23:53.987 Controller Memory Buffer Support 00:23:53.987 ================================ 00:23:53.987 Supported: No 00:23:53.987 00:23:53.987 Persistent Memory Region Support 00:23:53.987 ================================ 00:23:53.987 Supported: No 00:23:53.987 00:23:53.987 Admin Command Set Attributes 00:23:53.987 ============================ 00:23:53.987 Security Send/Receive: Not Supported 00:23:53.987 Format NVM: Not Supported 00:23:53.987 Firmware Activate/Download: Not Supported 00:23:53.987 Namespace Management: Not Supported 00:23:53.987 Device Self-Test: Not Supported 00:23:53.987 Directives: Not Supported 00:23:53.987 NVMe-MI: Not Supported 00:23:53.987 Virtualization Management: Not Supported 00:23:53.987 Doorbell Buffer Config: Not Supported 00:23:53.987 Get LBA Status Capability: Not Supported 00:23:53.987 Command & Feature Lockdown Capability: Not Supported 00:23:53.987 Abort Command Limit: 4 00:23:53.987 Async Event Request Limit: 4 00:23:53.987 Number of Firmware Slots: N/A 00:23:53.987 Firmware Slot 1 Read-Only: N/A 00:23:53.987 
Firmware Activation Without Reset: N/A 00:23:53.987 Multiple Update Detection Support: N/A 00:23:53.987 Firmware Update Granularity: No Information Provided 00:23:53.987 Per-Namespace SMART Log: Yes 00:23:53.987 Asymmetric Namespace Access Log Page: Supported 00:23:53.987 ANA Transition Time : 10 sec 00:23:53.987 00:23:53.987 Asymmetric Namespace Access Capabilities 00:23:53.987 ANA Optimized State : Supported 00:23:53.987 ANA Non-Optimized State : Supported 00:23:53.987 ANA Inaccessible State : Supported 00:23:53.987 ANA Persistent Loss State : Supported 00:23:53.987 ANA Change State : Supported 00:23:53.987 ANAGRPID is not changed : No 00:23:53.987 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:53.987 00:23:53.987 ANA Group Identifier Maximum : 128 00:23:53.987 Number of ANA Group Identifiers : 128 00:23:53.987 Max Number of Allowed Namespaces : 1024 00:23:53.987 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:53.987 Command Effects Log Page: Supported 00:23:53.987 Get Log Page Extended Data: Supported 00:23:53.987 Telemetry Log Pages: Not Supported 00:23:53.987 Persistent Event Log Pages: Not Supported 00:23:53.987 Supported Log Pages Log Page: May Support 00:23:53.987 Commands Supported & Effects Log Page: Not Supported 00:23:53.987 Feature Identifiers & Effects Log Page:May Support 00:23:53.987 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.987 Data Area 4 for Telemetry Log: Not Supported 00:23:53.987 Error Log Page Entries Supported: 128 00:23:53.987 Keep Alive: Supported 00:23:53.987 Keep Alive Granularity: 1000 ms 00:23:53.987 00:23:53.987 NVM Command Set Attributes 00:23:53.987 ========================== 00:23:53.987 Submission Queue Entry Size 00:23:53.987 Max: 64 00:23:53.987 Min: 64 00:23:53.987 Completion Queue Entry Size 00:23:53.987 Max: 16 00:23:53.987 Min: 16 00:23:53.987 Number of Namespaces: 1024 00:23:53.987 Compare Command: Not Supported 00:23:53.987 Write Uncorrectable Command: Not Supported 00:23:53.987 Dataset Management Command: Supported 00:23:53.987 Write Zeroes Command: Supported 00:23:53.987 Set Features Save Field: Not Supported 00:23:53.987 Reservations: Not Supported 00:23:53.987 Timestamp: Not Supported 00:23:53.987 Copy: Not Supported 00:23:53.987 Volatile Write Cache: Present 00:23:53.987 Atomic Write Unit (Normal): 1 00:23:53.987 Atomic Write Unit (PFail): 1 00:23:53.987 Atomic Compare & Write Unit: 1 00:23:53.987 Fused Compare & Write: Not Supported 00:23:53.987 Scatter-Gather List 00:23:53.987 SGL Command Set: Supported 00:23:53.987 SGL Keyed: Not Supported 00:23:53.987 SGL Bit Bucket Descriptor: Not Supported 00:23:53.987 SGL Metadata Pointer: Not Supported 00:23:53.987 Oversized SGL: Not Supported 00:23:53.987 SGL Metadata Address: Not Supported 00:23:53.987 SGL Offset: Supported 00:23:53.987 Transport SGL Data Block: Not Supported 00:23:53.987 Replay Protected Memory Block: Not Supported 00:23:53.987 00:23:53.987 Firmware Slot Information 00:23:53.987 ========================= 00:23:53.987 Active slot: 0 00:23:53.987 00:23:53.987 Asymmetric Namespace Access 00:23:53.987 =========================== 00:23:53.987 Change Count : 0 00:23:53.987 Number of ANA Group Descriptors : 1 00:23:53.987 ANA Group Descriptor : 0 00:23:53.987 ANA Group ID : 1 00:23:53.987 Number of NSID Values : 1 00:23:53.987 Change Count : 0 00:23:53.987 ANA State : 1 00:23:53.987 Namespace Identifier : 1 00:23:53.987 00:23:53.987 Commands Supported and Effects 00:23:53.987 ============================== 00:23:53.987 Admin Commands 00:23:53.987 -------------- 
00:23:53.987 Get Log Page (02h): Supported 00:23:53.987 Identify (06h): Supported 00:23:53.987 Abort (08h): Supported 00:23:53.987 Set Features (09h): Supported 00:23:53.987 Get Features (0Ah): Supported 00:23:53.987 Asynchronous Event Request (0Ch): Supported 00:23:53.987 Keep Alive (18h): Supported 00:23:53.987 I/O Commands 00:23:53.987 ------------ 00:23:53.987 Flush (00h): Supported 00:23:53.987 Write (01h): Supported LBA-Change 00:23:53.987 Read (02h): Supported 00:23:53.987 Write Zeroes (08h): Supported LBA-Change 00:23:53.987 Dataset Management (09h): Supported 00:23:53.987 00:23:53.987 Error Log 00:23:53.987 ========= 00:23:53.987 Entry: 0 00:23:53.987 Error Count: 0x3 00:23:53.987 Submission Queue Id: 0x0 00:23:53.987 Command Id: 0x5 00:23:53.987 Phase Bit: 0 00:23:53.987 Status Code: 0x2 00:23:53.987 Status Code Type: 0x0 00:23:53.987 Do Not Retry: 1 00:23:53.987 Error Location: 0x28 00:23:53.987 LBA: 0x0 00:23:53.987 Namespace: 0x0 00:23:53.987 Vendor Log Page: 0x0 00:23:53.987 ----------- 00:23:53.987 Entry: 1 00:23:53.987 Error Count: 0x2 00:23:53.987 Submission Queue Id: 0x0 00:23:53.987 Command Id: 0x5 00:23:53.987 Phase Bit: 0 00:23:53.987 Status Code: 0x2 00:23:53.987 Status Code Type: 0x0 00:23:53.987 Do Not Retry: 1 00:23:53.987 Error Location: 0x28 00:23:53.987 LBA: 0x0 00:23:53.987 Namespace: 0x0 00:23:53.987 Vendor Log Page: 0x0 00:23:53.987 ----------- 00:23:53.987 Entry: 2 00:23:53.987 Error Count: 0x1 00:23:53.987 Submission Queue Id: 0x0 00:23:53.987 Command Id: 0x4 00:23:53.987 Phase Bit: 0 00:23:53.987 Status Code: 0x2 00:23:53.987 Status Code Type: 0x0 00:23:53.987 Do Not Retry: 1 00:23:53.987 Error Location: 0x28 00:23:53.987 LBA: 0x0 00:23:53.987 Namespace: 0x0 00:23:53.987 Vendor Log Page: 0x0 00:23:53.987 00:23:53.987 Number of Queues 00:23:53.987 ================ 00:23:53.987 Number of I/O Submission Queues: 128 00:23:53.987 Number of I/O Completion Queues: 128 00:23:53.987 00:23:53.987 ZNS Specific Controller Data 00:23:53.987 ============================ 00:23:53.987 Zone Append Size Limit: 0 00:23:53.987 00:23:53.987 00:23:53.987 Active Namespaces 00:23:53.987 ================= 00:23:53.987 get_feature(0x05) failed 00:23:53.987 Namespace ID:1 00:23:53.987 Command Set Identifier: NVM (00h) 00:23:53.987 Deallocate: Supported 00:23:53.987 Deallocated/Unwritten Error: Not Supported 00:23:53.987 Deallocated Read Value: Unknown 00:23:53.987 Deallocate in Write Zeroes: Not Supported 00:23:53.987 Deallocated Guard Field: 0xFFFF 00:23:53.987 Flush: Supported 00:23:53.987 Reservation: Not Supported 00:23:53.987 Namespace Sharing Capabilities: Multiple Controllers 00:23:53.987 Size (in LBAs): 1953525168 (931GiB) 00:23:53.987 Capacity (in LBAs): 1953525168 (931GiB) 00:23:53.987 Utilization (in LBAs): 1953525168 (931GiB) 00:23:53.987 UUID: 3d42db9a-2b6e-48b5-8cc9-21f23902bd6d 00:23:53.987 Thin Provisioning: Not Supported 00:23:53.987 Per-NS Atomic Units: Yes 00:23:53.988 Atomic Boundary Size (Normal): 0 00:23:53.988 Atomic Boundary Size (PFail): 0 00:23:53.988 Atomic Boundary Offset: 0 00:23:53.988 NGUID/EUI64 Never Reused: No 00:23:53.988 ANA group ID: 1 00:23:53.988 Namespace Write Protected: No 00:23:53.988 Number of LBA Formats: 1 00:23:53.988 Current LBA Format: LBA Format #00 00:23:53.988 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.988 00:23:53.988 21:17:09 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:53.988 21:17:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:53.988 21:17:09 -- nvmf/common.sh@117 -- # sync 00:23:53.988 21:17:09 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.988 21:17:09 -- nvmf/common.sh@120 -- # set +e 00:23:53.988 21:17:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.988 21:17:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.988 rmmod nvme_tcp 00:23:53.988 rmmod nvme_fabrics 00:23:53.988 21:17:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.988 21:17:09 -- nvmf/common.sh@124 -- # set -e 00:23:53.988 21:17:09 -- nvmf/common.sh@125 -- # return 0 00:23:53.988 21:17:09 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:53.988 21:17:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:53.988 21:17:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:53.988 21:17:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:53.988 21:17:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.988 21:17:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.988 21:17:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.988 21:17:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.988 21:17:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.893 21:17:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.893 21:17:11 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:55.893 21:17:11 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:56.159 21:17:11 -- nvmf/common.sh@675 -- # echo 0 00:23:56.159 21:17:11 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.159 21:17:11 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:56.159 21:17:11 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:56.159 21:17:11 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.159 21:17:11 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:56.159 21:17:11 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:56.159 21:17:11 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:59.444 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:59.444 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:00.011 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:00.011 00:24:00.011 real 0m17.195s 00:24:00.011 user 0m4.290s 00:24:00.011 sys 0m9.282s 00:24:00.011 21:17:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:00.011 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:24:00.011 ************************************ 00:24:00.011 END 
TEST nvmf_identify_kernel_target 00:24:00.011 ************************************ 00:24:00.011 21:17:15 -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:00.011 21:17:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:00.011 21:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:00.011 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:24:00.270 ************************************ 00:24:00.270 START TEST nvmf_auth_host 00:24:00.270 ************************************ 00:24:00.270 21:17:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:00.270 * Looking for test storage... 00:24:00.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.270 21:17:16 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.270 21:17:16 -- nvmf/common.sh@7 -- # uname -s 00:24:00.270 21:17:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.270 21:17:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.270 21:17:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.270 21:17:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.270 21:17:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.270 21:17:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.270 21:17:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.270 21:17:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.270 21:17:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.270 21:17:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.270 21:17:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:00.270 21:17:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:00.270 21:17:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.270 21:17:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.270 21:17:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.270 21:17:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.270 21:17:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.270 21:17:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.270 21:17:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.270 21:17:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.270 21:17:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.270 21:17:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.270 21:17:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.270 21:17:16 -- paths/export.sh@5 -- # export PATH 00:24:00.270 21:17:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.270 21:17:16 -- nvmf/common.sh@47 -- # : 0 00:24:00.270 21:17:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.270 21:17:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.270 21:17:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.270 21:17:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.270 21:17:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.270 21:17:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.270 21:17:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.271 21:17:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.271 21:17:16 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:00.271 21:17:16 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:00.271 21:17:16 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:00.271 21:17:16 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:00.271 21:17:16 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:00.271 21:17:16 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:00.271 21:17:16 -- host/auth.sh@21 -- # keys=() 00:24:00.271 21:17:16 -- host/auth.sh@21 -- # ckeys=() 00:24:00.271 21:17:16 -- host/auth.sh@68 -- # nvmftestinit 00:24:00.271 21:17:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:00.271 21:17:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.271 21:17:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:00.271 21:17:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:00.271 21:17:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:00.271 21:17:16 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.271 21:17:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.271 21:17:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.271 21:17:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:00.271 21:17:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:00.271 21:17:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.271 21:17:16 -- common/autotest_common.sh@10 -- # set +x 00:24:06.919 21:17:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:06.919 21:17:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.919 21:17:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.919 21:17:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:06.919 21:17:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.919 21:17:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.919 21:17:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.919 21:17:22 -- nvmf/common.sh@295 -- # net_devs=() 00:24:06.919 21:17:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.919 21:17:22 -- nvmf/common.sh@296 -- # e810=() 00:24:06.919 21:17:22 -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.919 21:17:22 -- nvmf/common.sh@297 -- # x722=() 00:24:06.919 21:17:22 -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.919 21:17:22 -- nvmf/common.sh@298 -- # mlx=() 00:24:06.919 21:17:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.919 21:17:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.919 21:17:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.919 21:17:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.919 21:17:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.919 21:17:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.919 21:17:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:06.919 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:06.919 21:17:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
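
The discovery loop above resolves each supported E810 PCI function to its kernel netdev by globbing sysfs. A condensed sketch of that mapping, using the PCI addresses and interface names reported in this log (other test rigs will report different names); see the continuation of the loop just below for the second port:

    # map a supported NIC's PCI address to its netdev name, as the loop above does
    for pci in 0000:86:00.0 0000:86:00.1; do
            pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
            pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep the name
            ((${#pci_net_devs[@]} > 0)) && echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
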
00:24:06.919 21:17:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:06.919 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:06.919 21:17:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.919 21:17:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.919 21:17:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.919 21:17:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:06.919 21:17:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.919 21:17:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:06.919 Found net devices under 0000:86:00.0: cvl_0_0 00:24:06.919 21:17:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.919 21:17:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.919 21:17:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.919 21:17:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:06.919 21:17:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.919 21:17:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:06.919 Found net devices under 0000:86:00.1: cvl_0_1 00:24:06.919 21:17:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.919 21:17:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:06.919 21:17:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:06.919 21:17:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:06.919 21:17:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.919 21:17:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.919 21:17:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.919 21:17:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.919 21:17:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.919 21:17:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.919 21:17:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.919 21:17:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.919 21:17:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.919 21:17:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.919 21:17:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.919 21:17:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.919 21:17:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.919 21:17:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.919 21:17:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.919 21:17:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.919 21:17:22 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.919 21:17:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.919 21:17:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.919 21:17:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:06.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:24:06.919 00:24:06.919 --- 10.0.0.2 ping statistics --- 00:24:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.919 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:06.919 21:17:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:24:06.919 00:24:06.919 --- 10.0.0.1 ping statistics --- 00:24:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.919 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:24:06.919 21:17:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.919 21:17:22 -- nvmf/common.sh@411 -- # return 0 00:24:06.919 21:17:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:06.919 21:17:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.919 21:17:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:06.919 21:17:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.919 21:17:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:06.919 21:17:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:06.919 21:17:22 -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:06.919 21:17:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:06.919 21:17:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:06.919 21:17:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 21:17:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:06.920 21:17:22 -- nvmf/common.sh@470 -- # nvmfpid=3173046 00:24:06.920 21:17:22 -- nvmf/common.sh@471 -- # waitforlisten 3173046 00:24:06.920 21:17:22 -- common/autotest_common.sh@817 -- # '[' -z 3173046 ']' 00:24:06.920 21:17:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.920 21:17:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:06.920 21:17:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
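
nvmf_tcp_init above splits the two E810 ports between a dedicated target network namespace and the root namespace, so the TCP test traffic crosses a real link. A condensed sketch of that bring-up, with the interface names, namespace, and 10.0.0.0/24 addresses taken from this log (run as root; values differ on other rigs):

    TARGET_IF=cvl_0_0          # handed to the SPDK target inside the namespace
    INITIATOR_IF=cvl_0_1       # stays in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"                      # move the target port
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"               # initiator side: 10.0.0.1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target side: 10.0.0.2
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target check
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator check

This is also why the nvmf_tgt invocation that follows in the log is prefixed with ip netns exec cvl_0_0_ns_spdk: the target application must run inside the namespace that owns cvl_0_0.
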
00:24:06.920 21:17:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:06.920 21:17:22 -- common/autotest_common.sh@10 -- # set +x 00:24:07.852 21:17:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:07.852 21:17:23 -- common/autotest_common.sh@850 -- # return 0 00:24:07.852 21:17:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:07.852 21:17:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:07.852 21:17:23 -- common/autotest_common.sh@10 -- # set +x 00:24:07.852 21:17:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.852 21:17:23 -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:07.852 21:17:23 -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:07.852 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # local -A digests 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # digest=null 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # len=32 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # key=d8a5fbb3d07a973f376d3ba90cf68599 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-null.XXX 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-null.llJ 00:24:07.852 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key d8a5fbb3d07a973f376d3ba90cf68599 0 00:24:07.852 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 d8a5fbb3d07a973f376d3ba90cf68599 0 00:24:07.852 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # key=d8a5fbb3d07a973f376d3ba90cf68599 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # digest=0 00:24:07.852 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:07.852 21:17:23 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-null.llJ 00:24:07.852 21:17:23 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-null.llJ 00:24:07.852 21:17:23 -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.llJ 00:24:07.852 21:17:23 -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:07.852 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # local -A digests 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # digest=sha512 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # len=64 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # key=776bfb598645ea79b3d369fd29a363ccfcca12a366307bfc71edc84889c17189 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-sha512.XXX 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha512.knJ 00:24:07.852 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key 776bfb598645ea79b3d369fd29a363ccfcca12a366307bfc71edc84889c17189 3 00:24:07.852 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 776bfb598645ea79b3d369fd29a363ccfcca12a366307bfc71edc84889c17189 3 00:24:07.852 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.852 
21:17:23 -- nvmf/common.sh@693 -- # key=776bfb598645ea79b3d369fd29a363ccfcca12a366307bfc71edc84889c17189 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # digest=3 00:24:07.852 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:07.852 21:17:23 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha512.knJ 00:24:07.852 21:17:23 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha512.knJ 00:24:07.852 21:17:23 -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.knJ 00:24:07.852 21:17:23 -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:07.852 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # local -A digests 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # digest=null 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # len=48 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # key=c4a015a621ab294aa2409429908a352ab58e35db4ed2fa24 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-null.XXX 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-null.iEz 00:24:07.852 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key c4a015a621ab294aa2409429908a352ab58e35db4ed2fa24 0 00:24:07.852 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 c4a015a621ab294aa2409429908a352ab58e35db4ed2fa24 0 00:24:07.852 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # key=c4a015a621ab294aa2409429908a352ab58e35db4ed2fa24 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # digest=0 00:24:07.852 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:07.852 21:17:23 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-null.iEz 00:24:07.852 21:17:23 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-null.iEz 00:24:07.852 21:17:23 -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.iEz 00:24:07.852 21:17:23 -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:07.852 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # local -A digests 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # digest=sha384 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # len=48 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # key=b64975d0e90009fc7f3048dfdbf92282af8e64905d4d9a9b 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-sha384.XXX 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha384.Wxb 00:24:07.852 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key b64975d0e90009fc7f3048dfdbf92282af8e64905d4d9a9b 2 00:24:07.852 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 b64975d0e90009fc7f3048dfdbf92282af8e64905d4d9a9b 2 00:24:07.852 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # key=b64975d0e90009fc7f3048dfdbf92282af8e64905d4d9a9b 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # digest=2 00:24:07.852 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:07.852 21:17:23 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha384.Wxb 00:24:07.852 21:17:23 
-- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha384.Wxb 00:24:07.852 21:17:23 -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Wxb 00:24:07.852 21:17:23 -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:07.852 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.852 21:17:23 -- nvmf/common.sh@713 -- # local -A digests 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # digest=sha256 00:24:07.852 21:17:23 -- nvmf/common.sh@715 -- # len=32 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.852 21:17:23 -- nvmf/common.sh@716 -- # key=700b8afd7c9d291032c630984fa3ee45 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-sha256.XXX 00:24:07.852 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha256.QSL 00:24:07.852 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key 700b8afd7c9d291032c630984fa3ee45 1 00:24:07.852 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 700b8afd7c9d291032c630984fa3ee45 1 00:24:07.852 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # key=700b8afd7c9d291032c630984fa3ee45 00:24:07.852 21:17:23 -- nvmf/common.sh@693 -- # digest=1 00:24:07.852 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:08.110 21:17:23 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha256.QSL 00:24:08.110 21:17:23 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha256.QSL 00:24:08.110 21:17:23 -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.QSL 00:24:08.110 21:17:23 -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:08.110 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:08.110 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:08.110 21:17:23 -- nvmf/common.sh@713 -- # local -A digests 00:24:08.110 21:17:23 -- nvmf/common.sh@715 -- # digest=sha256 00:24:08.110 21:17:23 -- nvmf/common.sh@715 -- # len=32 00:24:08.110 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:08.110 21:17:23 -- nvmf/common.sh@716 -- # key=1ff70b85434702ebc1bedfb60a67dc11 00:24:08.110 21:17:23 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-sha256.XXX 00:24:08.110 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha256.wWu 00:24:08.110 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key 1ff70b85434702ebc1bedfb60a67dc11 1 00:24:08.110 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 1ff70b85434702ebc1bedfb60a67dc11 1 00:24:08.110 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # key=1ff70b85434702ebc1bedfb60a67dc11 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # digest=1 00:24:08.110 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:08.110 21:17:23 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha256.wWu 00:24:08.110 21:17:23 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha256.wWu 00:24:08.110 21:17:23 -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wWu 00:24:08.110 21:17:23 -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:08.110 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:08.110 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:08.110 21:17:23 -- 
nvmf/common.sh@713 -- # local -A digests 00:24:08.110 21:17:23 -- nvmf/common.sh@715 -- # digest=sha384 00:24:08.110 21:17:23 -- nvmf/common.sh@715 -- # len=48 00:24:08.110 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:08.110 21:17:23 -- nvmf/common.sh@716 -- # key=55f4bdabdd66fbe3838f05fb19d35de2b3ddf5999e2a9c81 00:24:08.110 21:17:23 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-sha384.XXX 00:24:08.110 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha384.8XC 00:24:08.110 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key 55f4bdabdd66fbe3838f05fb19d35de2b3ddf5999e2a9c81 2 00:24:08.110 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 55f4bdabdd66fbe3838f05fb19d35de2b3ddf5999e2a9c81 2 00:24:08.110 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # key=55f4bdabdd66fbe3838f05fb19d35de2b3ddf5999e2a9c81 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # digest=2 00:24:08.110 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:08.110 21:17:23 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha384.8XC 00:24:08.110 21:17:23 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha384.8XC 00:24:08.110 21:17:23 -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.8XC 00:24:08.110 21:17:23 -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:08.110 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:08.110 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:08.110 21:17:23 -- nvmf/common.sh@713 -- # local -A digests 00:24:08.110 21:17:23 -- nvmf/common.sh@715 -- # digest=null 00:24:08.110 21:17:23 -- nvmf/common.sh@715 -- # len=32 00:24:08.110 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:08.110 21:17:23 -- nvmf/common.sh@716 -- # key=eabcbea6f8445974eb614d9d1a4f8a06 00:24:08.110 21:17:23 -- nvmf/common.sh@717 -- # mktemp -t spdk.key-null.XXX 00:24:08.110 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-null.cFR 00:24:08.110 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key eabcbea6f8445974eb614d9d1a4f8a06 0 00:24:08.110 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 eabcbea6f8445974eb614d9d1a4f8a06 0 00:24:08.110 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # key=eabcbea6f8445974eb614d9d1a4f8a06 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # digest=0 00:24:08.110 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:08.110 21:17:23 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-null.cFR 00:24:08.110 21:17:23 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-null.cFR 00:24:08.110 21:17:23 -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.cFR 00:24:08.110 21:17:23 -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:08.110 21:17:23 -- nvmf/common.sh@712 -- # local digest len file key 00:24:08.110 21:17:23 -- nvmf/common.sh@713 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:08.110 21:17:23 -- nvmf/common.sh@713 -- # local -A digests 00:24:08.110 21:17:23 -- nvmf/common.sh@715 -- # digest=sha512 00:24:08.110 21:17:23 -- nvmf/common.sh@715 -- # len=64 00:24:08.110 21:17:23 -- nvmf/common.sh@716 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:08.110 21:17:23 -- nvmf/common.sh@716 -- # key=6192a176295fdff5ce29434124bc330e9db8afe894cc6fe17631d72e9b736b8b 00:24:08.110 21:17:23 -- 
nvmf/common.sh@717 -- # mktemp -t spdk.key-sha512.XXX 00:24:08.110 21:17:23 -- nvmf/common.sh@717 -- # file=/tmp/spdk.key-sha512.PLF 00:24:08.110 21:17:23 -- nvmf/common.sh@718 -- # format_dhchap_key 6192a176295fdff5ce29434124bc330e9db8afe894cc6fe17631d72e9b736b8b 3 00:24:08.110 21:17:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 6192a176295fdff5ce29434124bc330e9db8afe894cc6fe17631d72e9b736b8b 3 00:24:08.110 21:17:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # key=6192a176295fdff5ce29434124bc330e9db8afe894cc6fe17631d72e9b736b8b 00:24:08.110 21:17:23 -- nvmf/common.sh@693 -- # digest=3 00:24:08.110 21:17:23 -- nvmf/common.sh@694 -- # python - 00:24:08.110 21:17:24 -- nvmf/common.sh@719 -- # chmod 0600 /tmp/spdk.key-sha512.PLF 00:24:08.110 21:17:24 -- nvmf/common.sh@721 -- # echo /tmp/spdk.key-sha512.PLF 00:24:08.110 21:17:24 -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PLF 00:24:08.110 21:17:24 -- host/auth.sh@77 -- # ckeys[4]= 00:24:08.110 21:17:24 -- host/auth.sh@79 -- # waitforlisten 3173046 00:24:08.110 21:17:24 -- common/autotest_common.sh@817 -- # '[' -z 3173046 ']' 00:24:08.110 21:17:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.110 21:17:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:08.111 21:17:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.111 21:17:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:08.111 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:08.369 21:17:24 -- common/autotest_common.sh@850 -- # return 0 00:24:08.369 21:17:24 -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.369 21:17:24 -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.llJ 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.knJ ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.knJ 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.369 21:17:24 -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.iEz 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Wxb ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wxb 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@80 -- # for i 
in "${!keys[@]}" 00:24:08.369 21:17:24 -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.QSL 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wWu ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wWu 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.369 21:17:24 -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.8XC 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.cFR ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.cFR 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.369 21:17:24 -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PLF 00:24:08.369 21:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.369 21:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.369 21:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.369 21:17:24 -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:08.369 21:17:24 -- host/auth.sh@85 -- # nvmet_auth_init 00:24:08.369 21:17:24 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:08.369 21:17:24 -- nvmf/common.sh@730 -- # local ip 00:24:08.369 21:17:24 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:08.369 21:17:24 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:08.369 21:17:24 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.369 21:17:24 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.369 21:17:24 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:08.369 21:17:24 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.369 21:17:24 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:08.369 21:17:24 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:08.369 21:17:24 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:08.369 21:17:24 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:08.369 21:17:24 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:08.369 21:17:24 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:08.369 21:17:24 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:08.369 21:17:24 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:08.369 21:17:24 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:08.369 21:17:24 -- nvmf/common.sh@628 -- # local block nvme 00:24:08.369 
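
Each gen_dhchap_key call above draws random hex from /dev/urandom and wraps it in a DH-HMAC-CHAP secret string before the keyring_file_add_key RPCs register the resulting files as key0..key4/ckey0..ckey3. The body of the inline `python -` step is not captured by xtrace; the sketch below is a reconstruction consistent with the DHHC-1 strings visible later in this log (the ASCII hex key with a 4-byte CRC-32 appended, base64-encoded, with the digest id 0-3 selecting none/SHA-256/SHA-384/SHA-512):

    digest=0                                          # 0 = null digest, as used for keys[0] above
    len=32                                            # hex characters of key material
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" "$digest" > "$file" <<'PYEOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                        # the hex string itself is the secret (ASCII)
    crc = zlib.crc32(key).to_bytes(4, "little")       # 4-byte checksum appended before encoding
    print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    PYEOF
    chmod 0600 "$file"

For example, keys[1] above (c4a015a6...fa24, digest 0) comes out as the DHHC-1:00:YzRhMDE1...: string that later appears in the nvmet_auth_set_key calls.
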
21:17:24 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:08.369 21:17:24 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:08.627 21:17:24 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:08.627 21:17:24 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:11.914 Waiting for block devices as requested 00:24:11.914 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:11.914 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:11.915 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:11.915 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:11.915 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:11.915 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:11.915 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:11.915 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:11.915 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:12.173 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:12.173 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:12.173 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:12.432 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:12.432 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:12.432 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:12.432 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:12.691 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:13.259 21:17:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:13.259 21:17:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:13.259 21:17:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:13.259 21:17:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:13.259 21:17:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:13.259 21:17:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:13.259 21:17:29 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:13.259 21:17:29 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:13.259 21:17:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:13.259 No valid GPT data, bailing 00:24:13.259 21:17:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:13.259 21:17:29 -- scripts/common.sh@391 -- # pt= 00:24:13.259 21:17:29 -- scripts/common.sh@392 -- # return 1 00:24:13.259 21:17:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:13.259 21:17:29 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:13.259 21:17:29 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:13.259 21:17:29 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:13.259 21:17:29 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:13.259 21:17:29 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:13.259 21:17:29 -- nvmf/common.sh@656 -- # echo 1 00:24:13.259 21:17:29 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:13.259 21:17:29 -- nvmf/common.sh@658 -- # echo 1 00:24:13.259 21:17:29 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:13.259 21:17:29 -- nvmf/common.sh@661 -- # echo tcp 00:24:13.259 21:17:29 -- nvmf/common.sh@662 -- # echo 4420 00:24:13.259 21:17:29 -- nvmf/common.sh@663 -- # echo ipv4 00:24:13.259 21:17:29 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:13.259 
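
configure_kernel_target above hands the 0000:5e:00.0 SSD back to the kernel nvme driver (setup.sh reset), then builds a kernel nvmet subsystem around /dev/nvme0n1 and exposes it on the namespaced 10.0.0.1:4420 TCP listener. xtrace shows the echo commands but not their redirect targets, so the configfs attribute names below are assumed from the standard kernel nvmet layout; the paths, NQN, and values are the ones visible in this log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$ns" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"   # assumed attr; redirect not logged
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$ns/device_path"
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

The nvme discover output that follows confirms the result: a discovery entry plus the nqn.2024-02.io.spdk:cnode0 subsystem, both reachable at 10.0.0.1:4420 over TCP.
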
21:17:29 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:13.259 00:24:13.259 Discovery Log Number of Records 2, Generation counter 2 00:24:13.259 =====Discovery Log Entry 0====== 00:24:13.259 trtype: tcp 00:24:13.259 adrfam: ipv4 00:24:13.259 subtype: current discovery subsystem 00:24:13.259 treq: not specified, sq flow control disable supported 00:24:13.259 portid: 1 00:24:13.259 trsvcid: 4420 00:24:13.259 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:13.259 traddr: 10.0.0.1 00:24:13.259 eflags: none 00:24:13.259 sectype: none 00:24:13.259 =====Discovery Log Entry 1====== 00:24:13.259 trtype: tcp 00:24:13.259 adrfam: ipv4 00:24:13.259 subtype: nvme subsystem 00:24:13.259 treq: not specified, sq flow control disable supported 00:24:13.259 portid: 1 00:24:13.259 trsvcid: 4420 00:24:13.259 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:13.259 traddr: 10.0.0.1 00:24:13.259 eflags: none 00:24:13.259 sectype: none 00:24:13.259 21:17:29 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:13.518 21:17:29 -- host/auth.sh@37 -- # echo 0 00:24:13.518 21:17:29 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:13.518 21:17:29 -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:13.518 21:17:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.518 21:17:29 -- host/auth.sh@44 -- # digest=sha256 00:24:13.518 21:17:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.518 21:17:29 -- host/auth.sh@44 -- # keyid=1 00:24:13.518 21:17:29 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:13.518 21:17:29 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:13.518 21:17:29 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.518 21:17:29 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.518 21:17:29 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:13.518 21:17:29 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:13.518 21:17:29 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:13.518 21:17:29 -- host/auth.sh@93 -- # IFS=, 00:24:13.518 21:17:29 -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:13.518 21:17:29 -- host/auth.sh@93 -- # IFS=, 00:24:13.518 21:17:29 -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:13.518 21:17:29 -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:13.518 21:17:29 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.518 21:17:29 -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:13.518 21:17:29 -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:13.518 21:17:29 -- host/auth.sh@57 -- # keyid=1 00:24:13.518 21:17:29 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.518 21:17:29 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:13.518 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.518 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.518 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.518 21:17:29 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.518 21:17:29 -- nvmf/common.sh@730 -- # local ip 00:24:13.518 21:17:29 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:13.518 21:17:29 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:13.518 21:17:29 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.518 21:17:29 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.518 21:17:29 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:13.518 21:17:29 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.518 21:17:29 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:13.518 21:17:29 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:13.518 21:17:29 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:13.518 21:17:29 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.518 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.518 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.518 nvme0n1 00:24:13.518 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.518 21:17:29 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.518 21:17:29 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.518 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.518 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.518 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.518 21:17:29 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.518 21:17:29 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.518 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.518 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.518 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.518 21:17:29 -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:13.518 21:17:29 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.518 21:17:29 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.518 21:17:29 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:13.518 21:17:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.518 21:17:29 -- host/auth.sh@44 -- # digest=sha256 00:24:13.518 21:17:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.518 21:17:29 -- host/auth.sh@44 -- # keyid=0 00:24:13.518 21:17:29 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:13.518 21:17:29 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:13.518 21:17:29 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.518 21:17:29 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.518 21:17:29 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:13.518 21:17:29 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:13.518 21:17:29 -- host/auth.sh@51 -- # echo 
DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:13.518 21:17:29 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:13.518 21:17:29 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.518 21:17:29 -- host/auth.sh@57 -- # digest=sha256 00:24:13.518 21:17:29 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.518 21:17:29 -- host/auth.sh@57 -- # keyid=0 00:24:13.518 21:17:29 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.518 21:17:29 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:13.518 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.518 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.518 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.518 21:17:29 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.519 21:17:29 -- nvmf/common.sh@730 -- # local ip 00:24:13.519 21:17:29 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:13.519 21:17:29 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:13.519 21:17:29 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.519 21:17:29 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.519 21:17:29 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:13.519 21:17:29 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.519 21:17:29 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:13.519 21:17:29 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:13.519 21:17:29 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:13.519 21:17:29 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.519 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.519 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.777 nvme0n1 00:24:13.777 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.777 21:17:29 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.777 21:17:29 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.777 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.777 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.777 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.777 21:17:29 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.777 21:17:29 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.777 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.777 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.777 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.777 21:17:29 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.777 21:17:29 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:13.777 21:17:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.777 21:17:29 -- host/auth.sh@44 -- # digest=sha256 00:24:13.777 21:17:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.777 21:17:29 -- host/auth.sh@44 -- # keyid=1 00:24:13.777 21:17:29 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:13.777 21:17:29 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 
00:24:13.777 21:17:29 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.777 21:17:29 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.777 21:17:29 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:13.777 21:17:29 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:13.777 21:17:29 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:13.777 21:17:29 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:13.777 21:17:29 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.777 21:17:29 -- host/auth.sh@57 -- # digest=sha256 00:24:13.777 21:17:29 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.777 21:17:29 -- host/auth.sh@57 -- # keyid=1 00:24:13.777 21:17:29 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.777 21:17:29 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:13.777 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.777 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.777 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.777 21:17:29 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.777 21:17:29 -- nvmf/common.sh@730 -- # local ip 00:24:13.777 21:17:29 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:13.777 21:17:29 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:13.777 21:17:29 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.777 21:17:29 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.777 21:17:29 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:13.777 21:17:29 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.777 21:17:29 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:13.777 21:17:29 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:13.777 21:17:29 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:13.777 21:17:29 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.777 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.777 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:14.037 nvme0n1 00:24:14.037 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.037 21:17:29 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.037 21:17:29 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.037 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.037 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:14.037 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.037 21:17:29 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.037 21:17:29 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.037 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.037 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:14.037 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.037 21:17:29 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.037 21:17:29 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:14.037 21:17:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.037 
21:17:29 -- host/auth.sh@44 -- # digest=sha256 00:24:14.037 21:17:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.037 21:17:29 -- host/auth.sh@44 -- # keyid=2 00:24:14.037 21:17:29 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:14.037 21:17:29 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:14.037 21:17:29 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.037 21:17:29 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.037 21:17:29 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:14.037 21:17:29 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:14.037 21:17:29 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:14.037 21:17:29 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:14.037 21:17:29 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.037 21:17:29 -- host/auth.sh@57 -- # digest=sha256 00:24:14.037 21:17:29 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.037 21:17:29 -- host/auth.sh@57 -- # keyid=2 00:24:14.037 21:17:29 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.037 21:17:29 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.037 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.037 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:14.037 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.037 21:17:29 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.037 21:17:29 -- nvmf/common.sh@730 -- # local ip 00:24:14.037 21:17:29 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:14.037 21:17:29 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:14.037 21:17:29 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.037 21:17:29 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.037 21:17:29 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:14.037 21:17:29 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.037 21:17:29 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:14.037 21:17:29 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:14.037 21:17:29 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:14.037 21:17:29 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.037 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.037 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:14.295 nvme0n1 00:24:14.295 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.296 21:17:29 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.296 21:17:29 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.296 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.296 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:14.296 21:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.296 21:17:30 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.296 21:17:30 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.296 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.296 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.296 
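
Each keyid iteration above installs one generated secret on the kernel target's host entry (nvmet_auth_set_key) and then authenticates an SPDK bdev_nvme controller against it (connect_authenticate). The host entry itself was created once by nvmet_auth_init earlier in the log; the echo redirect targets under the nvmet host directory are hidden by xtrace, so the dhchap_* attribute names below are assumed from the standard kernel nvmet auth layout, and rpc.py stands in for the rpc_cmd wrapper used by the test. A sketch of one iteration (keyid 2, values from this log):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    key=$(cat /tmp/spdk.key-sha256.QSL)      # keys[2] generated above
    ckey=$(cat /tmp/spdk.key-sha256.wWu)     # ckeys[2]

    # one-time host registration (nvmet_auth_init): require an allowed, authenticated host
    mkdir -p "$host"
    echo 0 > "$subsys/attr_allow_any_host"
    ln -sf "$host" "$subsys/allowed_hosts/"

    # per-iteration key install on the target side (attr names assumed, redirects not logged)
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo "$key"         > "$host/dhchap_key"
    echo "$ckey"        > "$host/dhchap_ctrl_key"

    # initiator side: allow the matching digest/dhgroup, then attach with DH-HMAC-CHAP
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2

The nvme0n1 lines and bdev_nvme_get_controllers/bdev_nvme_detach_controller calls that follow each attach verify that the authenticated namespace appeared and then tear the controller down before the next digest/dhgroup/keyid combination is tried.
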
21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.296 21:17:30 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.296 21:17:30 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:14.296 21:17:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.296 21:17:30 -- host/auth.sh@44 -- # digest=sha256 00:24:14.296 21:17:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.296 21:17:30 -- host/auth.sh@44 -- # keyid=3 00:24:14.296 21:17:30 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:14.296 21:17:30 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:14.296 21:17:30 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.296 21:17:30 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.296 21:17:30 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:14.296 21:17:30 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:14.296 21:17:30 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:14.296 21:17:30 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:14.296 21:17:30 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.296 21:17:30 -- host/auth.sh@57 -- # digest=sha256 00:24:14.296 21:17:30 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.296 21:17:30 -- host/auth.sh@57 -- # keyid=3 00:24:14.296 21:17:30 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.296 21:17:30 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.296 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.296 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.296 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.296 21:17:30 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.296 21:17:30 -- nvmf/common.sh@730 -- # local ip 00:24:14.296 21:17:30 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:14.296 21:17:30 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:14.296 21:17:30 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.296 21:17:30 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.296 21:17:30 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:14.296 21:17:30 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.296 21:17:30 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:14.296 21:17:30 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:14.296 21:17:30 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:14.296 21:17:30 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.296 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.296 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.296 nvme0n1 00:24:14.296 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.296 21:17:30 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.296 21:17:30 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.296 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.296 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.296 21:17:30 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.296 21:17:30 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.296 21:17:30 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.296 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.296 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.555 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.555 21:17:30 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.555 21:17:30 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:14.555 21:17:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.555 21:17:30 -- host/auth.sh@44 -- # digest=sha256 00:24:14.555 21:17:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.555 21:17:30 -- host/auth.sh@44 -- # keyid=4 00:24:14.555 21:17:30 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:14.555 21:17:30 -- host/auth.sh@46 -- # ckey= 00:24:14.555 21:17:30 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.555 21:17:30 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.555 21:17:30 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:14.555 21:17:30 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.555 21:17:30 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:14.555 21:17:30 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.555 21:17:30 -- host/auth.sh@57 -- # digest=sha256 00:24:14.555 21:17:30 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.555 21:17:30 -- host/auth.sh@57 -- # keyid=4 00:24:14.555 21:17:30 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.555 21:17:30 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.555 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.555 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.555 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.555 21:17:30 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.555 21:17:30 -- nvmf/common.sh@730 -- # local ip 00:24:14.555 21:17:30 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:14.555 21:17:30 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:14.555 21:17:30 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.555 21:17:30 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.555 21:17:30 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:14.555 21:17:30 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.555 21:17:30 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:14.555 21:17:30 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:14.555 21:17:30 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:14.555 21:17:30 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.555 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.555 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.555 nvme0n1 00:24:14.555 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.555 21:17:30 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.555 21:17:30 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.555 21:17:30 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.555 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.555 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.555 21:17:30 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.555 21:17:30 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.555 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.555 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.555 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.555 21:17:30 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.555 21:17:30 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.555 21:17:30 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:14.555 21:17:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.555 21:17:30 -- host/auth.sh@44 -- # digest=sha256 00:24:14.555 21:17:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.555 21:17:30 -- host/auth.sh@44 -- # keyid=0 00:24:14.555 21:17:30 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:14.555 21:17:30 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:14.555 21:17:30 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.555 21:17:30 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.555 21:17:30 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:14.555 21:17:30 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:14.555 21:17:30 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:14.555 21:17:30 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:14.555 21:17:30 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.555 21:17:30 -- host/auth.sh@57 -- # digest=sha256 00:24:14.555 21:17:30 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.555 21:17:30 -- host/auth.sh@57 -- # keyid=0 00:24:14.555 21:17:30 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.555 21:17:30 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:14.555 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.555 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.555 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.555 21:17:30 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.555 21:17:30 -- nvmf/common.sh@730 -- # local ip 00:24:14.555 21:17:30 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:14.555 21:17:30 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:14.555 21:17:30 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.555 21:17:30 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.555 21:17:30 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:14.555 21:17:30 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.555 21:17:30 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:14.555 21:17:30 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:14.555 21:17:30 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:14.555 21:17:30 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.555 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.555 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.814 nvme0n1 00:24:14.814 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.814 21:17:30 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.814 21:17:30 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.814 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.814 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.814 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.814 21:17:30 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.814 21:17:30 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.814 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.814 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.814 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.814 21:17:30 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.814 21:17:30 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:14.814 21:17:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.814 21:17:30 -- host/auth.sh@44 -- # digest=sha256 00:24:14.814 21:17:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.814 21:17:30 -- host/auth.sh@44 -- # keyid=1 00:24:14.814 21:17:30 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:14.814 21:17:30 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:14.814 21:17:30 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.814 21:17:30 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.814 21:17:30 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:14.814 21:17:30 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:14.814 21:17:30 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:14.814 21:17:30 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:14.814 21:17:30 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.815 21:17:30 -- host/auth.sh@57 -- # digest=sha256 00:24:14.815 21:17:30 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.815 21:17:30 -- host/auth.sh@57 -- # keyid=1 00:24:14.815 21:17:30 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.815 21:17:30 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:14.815 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.815 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.815 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.815 21:17:30 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.815 21:17:30 -- nvmf/common.sh@730 -- # local ip 00:24:14.815 21:17:30 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:14.815 21:17:30 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:14.815 21:17:30 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.815 21:17:30 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.815 
21:17:30 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:14.815 21:17:30 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.815 21:17:30 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:14.815 21:17:30 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:14.815 21:17:30 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:14.815 21:17:30 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.815 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.815 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.074 nvme0n1 00:24:15.074 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.074 21:17:30 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.074 21:17:30 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.074 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.074 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.074 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.074 21:17:30 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.074 21:17:30 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.074 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.074 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.074 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.074 21:17:30 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.074 21:17:30 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:15.074 21:17:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.074 21:17:30 -- host/auth.sh@44 -- # digest=sha256 00:24:15.074 21:17:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.074 21:17:30 -- host/auth.sh@44 -- # keyid=2 00:24:15.074 21:17:30 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:15.074 21:17:30 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:15.074 21:17:30 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.074 21:17:30 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.074 21:17:30 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:15.074 21:17:30 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:15.074 21:17:30 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:15.074 21:17:30 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:15.074 21:17:30 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.074 21:17:30 -- host/auth.sh@57 -- # digest=sha256 00:24:15.074 21:17:30 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.074 21:17:30 -- host/auth.sh@57 -- # keyid=2 00:24:15.074 21:17:30 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.074 21:17:30 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:15.074 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.074 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.074 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.074 21:17:30 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.074 21:17:30 -- nvmf/common.sh@730 -- # local ip 00:24:15.074 21:17:30 
-- nvmf/common.sh@731 -- # ip_candidates=() 00:24:15.074 21:17:30 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:15.074 21:17:30 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.074 21:17:30 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.074 21:17:30 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:15.074 21:17:30 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.074 21:17:30 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:15.074 21:17:30 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:15.074 21:17:30 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:15.074 21:17:30 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.074 21:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.074 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.333 nvme0n1 00:24:15.333 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.333 21:17:31 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.333 21:17:31 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.333 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.333 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.333 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.333 21:17:31 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.333 21:17:31 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.333 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.333 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.333 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.333 21:17:31 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.333 21:17:31 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:15.333 21:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.333 21:17:31 -- host/auth.sh@44 -- # digest=sha256 00:24:15.333 21:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.333 21:17:31 -- host/auth.sh@44 -- # keyid=3 00:24:15.333 21:17:31 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:15.333 21:17:31 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:15.333 21:17:31 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.333 21:17:31 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.333 21:17:31 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:15.333 21:17:31 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:15.333 21:17:31 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:15.333 21:17:31 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:15.333 21:17:31 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.333 21:17:31 -- host/auth.sh@57 -- # digest=sha256 00:24:15.333 21:17:31 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.333 21:17:31 -- host/auth.sh@57 -- # keyid=3 00:24:15.333 21:17:31 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.333 21:17:31 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
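The host-side half of each iteration above is the same two RPCs, repeated per digest/dhgroup/keyid combination: first constrain the initiator's DH-HMAC-CHAP negotiation, then attach with the matching key names. A minimal sketch for the ffdhe3072/keyid=3 pass traced here, using only RPC names, flags, addresses, and NQNs that appear verbatim in the log (rpc_cmd is the test framework's wrapper around scripts/rpc.py):

  # limit the initiator to the digest and DH group under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # attach using the host key and, when one is configured, the bidirectional controller key;
  # the keyid=4 passes in this log omit --dhchap-ctrlr-key because ckeys[4] is empty
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3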
00:24:15.333 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.333 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.333 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.333 21:17:31 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.333 21:17:31 -- nvmf/common.sh@730 -- # local ip 00:24:15.333 21:17:31 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:15.333 21:17:31 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:15.333 21:17:31 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.333 21:17:31 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.333 21:17:31 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:15.333 21:17:31 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.333 21:17:31 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:15.333 21:17:31 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:15.333 21:17:31 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:15.333 21:17:31 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.333 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.333 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.592 nvme0n1 00:24:15.592 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.592 21:17:31 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.592 21:17:31 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.592 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.592 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.592 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.592 21:17:31 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.592 21:17:31 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.592 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.592 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.592 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.592 21:17:31 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.592 21:17:31 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:15.592 21:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.592 21:17:31 -- host/auth.sh@44 -- # digest=sha256 00:24:15.592 21:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.592 21:17:31 -- host/auth.sh@44 -- # keyid=4 00:24:15.592 21:17:31 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:15.592 21:17:31 -- host/auth.sh@46 -- # ckey= 00:24:15.592 21:17:31 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.592 21:17:31 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.592 21:17:31 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:15.592 21:17:31 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.592 21:17:31 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:15.592 21:17:31 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.592 21:17:31 -- host/auth.sh@57 -- # digest=sha256 00:24:15.592 21:17:31 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.592 21:17:31 -- host/auth.sh@57 -- # keyid=4 00:24:15.592 21:17:31 -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.592 21:17:31 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:15.592 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.592 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.592 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.592 21:17:31 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.592 21:17:31 -- nvmf/common.sh@730 -- # local ip 00:24:15.592 21:17:31 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:15.592 21:17:31 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:15.592 21:17:31 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.592 21:17:31 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.592 21:17:31 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:15.592 21:17:31 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.592 21:17:31 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:15.592 21:17:31 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:15.592 21:17:31 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:15.592 21:17:31 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.592 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.592 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.851 nvme0n1 00:24:15.851 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.851 21:17:31 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.851 21:17:31 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.851 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.851 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.851 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.851 21:17:31 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.851 21:17:31 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.851 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.851 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.851 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.851 21:17:31 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.851 21:17:31 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.851 21:17:31 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:15.851 21:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.851 21:17:31 -- host/auth.sh@44 -- # digest=sha256 00:24:15.851 21:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.851 21:17:31 -- host/auth.sh@44 -- # keyid=0 00:24:15.851 21:17:31 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:15.851 21:17:31 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:15.851 21:17:31 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.851 21:17:31 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.851 21:17:31 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:15.851 21:17:31 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:15.851 21:17:31 -- host/auth.sh@51 -- # echo 
DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:15.851 21:17:31 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:15.851 21:17:31 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.851 21:17:31 -- host/auth.sh@57 -- # digest=sha256 00:24:15.851 21:17:31 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.851 21:17:31 -- host/auth.sh@57 -- # keyid=0 00:24:15.851 21:17:31 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.851 21:17:31 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:15.851 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.851 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.851 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.851 21:17:31 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.851 21:17:31 -- nvmf/common.sh@730 -- # local ip 00:24:15.851 21:17:31 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:15.851 21:17:31 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:15.851 21:17:31 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.851 21:17:31 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.851 21:17:31 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:15.851 21:17:31 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.851 21:17:31 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:15.851 21:17:31 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:15.851 21:17:31 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:15.851 21:17:31 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.851 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.851 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.110 nvme0n1 00:24:16.110 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.110 21:17:31 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.110 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.110 21:17:31 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.110 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.110 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.110 21:17:31 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.110 21:17:31 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.110 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.110 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.110 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.110 21:17:31 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.110 21:17:31 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:16.110 21:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.110 21:17:31 -- host/auth.sh@44 -- # digest=sha256 00:24:16.110 21:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.110 21:17:31 -- host/auth.sh@44 -- # keyid=1 00:24:16.110 21:17:31 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:16.110 21:17:31 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 
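The target-side nvmet_auth_set_key calls only show their echo payloads, because bash xtrace does not print redirection targets. A plausible reconstruction of where those values go for the ffdhe4096/keyid=1 pass, assuming the Linux kernel nvmet target's per-host configfs attributes (the /sys/kernel/config path and the attribute names are an assumption, not visible in this log; keys truncated here for brevity):

  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # digest, echoed at host/auth.sh@48
  echo ffdhe4096      > "$host_dir/dhchap_dhgroup"   # DH group, echoed at host/auth.sh@49
  echo "DHHC-1:00:YzRhMDE1...YwqxSg==:" > "$host_dir/dhchap_key"       # host key, echoed at @50
  echo "DHHC-1:02:YjY0OTc1...AnqD2g==:" > "$host_dir/dhchap_ctrl_key"  # controller key, echoed at @51 when set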
00:24:16.110 21:17:31 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.110 21:17:31 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.110 21:17:31 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:16.110 21:17:31 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:16.110 21:17:31 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:16.110 21:17:31 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:16.110 21:17:31 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.110 21:17:31 -- host/auth.sh@57 -- # digest=sha256 00:24:16.110 21:17:31 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.110 21:17:31 -- host/auth.sh@57 -- # keyid=1 00:24:16.110 21:17:31 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.110 21:17:31 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:16.110 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.110 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.110 21:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.110 21:17:31 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.110 21:17:31 -- nvmf/common.sh@730 -- # local ip 00:24:16.110 21:17:31 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:16.110 21:17:31 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:16.110 21:17:31 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.110 21:17:31 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.110 21:17:31 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:16.110 21:17:31 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.110 21:17:31 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:16.110 21:17:31 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:16.110 21:17:31 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:16.110 21:17:31 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.110 21:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.110 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.369 nvme0n1 00:24:16.369 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.369 21:17:32 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.369 21:17:32 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.369 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.369 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.369 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.369 21:17:32 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.369 21:17:32 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.369 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.369 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.369 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.369 21:17:32 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.369 21:17:32 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:16.369 21:17:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.369 
21:17:32 -- host/auth.sh@44 -- # digest=sha256 00:24:16.369 21:17:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.369 21:17:32 -- host/auth.sh@44 -- # keyid=2 00:24:16.369 21:17:32 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:16.369 21:17:32 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:16.369 21:17:32 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.369 21:17:32 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.369 21:17:32 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:16.369 21:17:32 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:16.369 21:17:32 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:16.369 21:17:32 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:16.369 21:17:32 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.369 21:17:32 -- host/auth.sh@57 -- # digest=sha256 00:24:16.369 21:17:32 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.369 21:17:32 -- host/auth.sh@57 -- # keyid=2 00:24:16.369 21:17:32 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.369 21:17:32 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:16.369 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.369 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.369 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.369 21:17:32 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.369 21:17:32 -- nvmf/common.sh@730 -- # local ip 00:24:16.369 21:17:32 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:16.369 21:17:32 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:16.369 21:17:32 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.369 21:17:32 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.369 21:17:32 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:16.369 21:17:32 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.369 21:17:32 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:16.369 21:17:32 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:16.369 21:17:32 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:16.369 21:17:32 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.369 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.369 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.628 nvme0n1 00:24:16.628 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.628 21:17:32 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.628 21:17:32 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.628 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.628 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.628 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.628 21:17:32 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.628 21:17:32 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.628 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.628 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.887 
21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.887 21:17:32 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.887 21:17:32 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:16.887 21:17:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.887 21:17:32 -- host/auth.sh@44 -- # digest=sha256 00:24:16.887 21:17:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.887 21:17:32 -- host/auth.sh@44 -- # keyid=3 00:24:16.887 21:17:32 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:16.887 21:17:32 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:16.887 21:17:32 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.887 21:17:32 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.887 21:17:32 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:16.887 21:17:32 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:16.887 21:17:32 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:16.887 21:17:32 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:16.887 21:17:32 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.887 21:17:32 -- host/auth.sh@57 -- # digest=sha256 00:24:16.887 21:17:32 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.887 21:17:32 -- host/auth.sh@57 -- # keyid=3 00:24:16.887 21:17:32 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.887 21:17:32 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:16.887 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.887 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.887 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.887 21:17:32 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.887 21:17:32 -- nvmf/common.sh@730 -- # local ip 00:24:16.887 21:17:32 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:16.887 21:17:32 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:16.887 21:17:32 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.887 21:17:32 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.887 21:17:32 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:16.887 21:17:32 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.887 21:17:32 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:16.887 21:17:32 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:16.887 21:17:32 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:16.887 21:17:32 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.887 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.887 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:17.146 nvme0n1 00:24:17.146 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.146 21:17:32 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.146 21:17:32 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.146 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.146 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:17.146 21:17:32 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.146 21:17:32 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.146 21:17:32 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.146 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.146 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:17.146 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.146 21:17:32 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.146 21:17:32 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:17.146 21:17:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.146 21:17:32 -- host/auth.sh@44 -- # digest=sha256 00:24:17.146 21:17:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:17.146 21:17:32 -- host/auth.sh@44 -- # keyid=4 00:24:17.146 21:17:32 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:17.146 21:17:32 -- host/auth.sh@46 -- # ckey= 00:24:17.146 21:17:32 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.146 21:17:32 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:17.146 21:17:32 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:17.146 21:17:32 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.146 21:17:32 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:17.146 21:17:32 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.146 21:17:32 -- host/auth.sh@57 -- # digest=sha256 00:24:17.146 21:17:32 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:17.146 21:17:32 -- host/auth.sh@57 -- # keyid=4 00:24:17.146 21:17:32 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.146 21:17:32 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:17.146 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.146 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:17.146 21:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.146 21:17:32 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.146 21:17:32 -- nvmf/common.sh@730 -- # local ip 00:24:17.146 21:17:32 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:17.146 21:17:32 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:17.146 21:17:32 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.146 21:17:32 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.146 21:17:32 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:17.146 21:17:32 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.146 21:17:32 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:17.146 21:17:32 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:17.146 21:17:32 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:17.146 21:17:32 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.146 21:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.146 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:17.405 nvme0n1 00:24:17.405 21:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.405 21:17:33 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.405 21:17:33 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.405 21:17:33 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.405 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.405 21:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.405 21:17:33 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.405 21:17:33 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.405 21:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.405 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.405 21:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.405 21:17:33 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.405 21:17:33 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.405 21:17:33 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:17.405 21:17:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.405 21:17:33 -- host/auth.sh@44 -- # digest=sha256 00:24:17.405 21:17:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.405 21:17:33 -- host/auth.sh@44 -- # keyid=0 00:24:17.405 21:17:33 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:17.405 21:17:33 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:17.405 21:17:33 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.405 21:17:33 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.405 21:17:33 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:17.405 21:17:33 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:17.405 21:17:33 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:17.405 21:17:33 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:17.405 21:17:33 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.405 21:17:33 -- host/auth.sh@57 -- # digest=sha256 00:24:17.405 21:17:33 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.405 21:17:33 -- host/auth.sh@57 -- # keyid=0 00:24:17.405 21:17:33 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.405 21:17:33 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.405 21:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.405 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.405 21:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.405 21:17:33 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.405 21:17:33 -- nvmf/common.sh@730 -- # local ip 00:24:17.405 21:17:33 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:17.405 21:17:33 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:17.405 21:17:33 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.405 21:17:33 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.405 21:17:33 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:17.405 21:17:33 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.405 21:17:33 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:17.405 21:17:33 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:17.405 21:17:33 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:17.405 21:17:33 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.405 21:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.405 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.663 nvme0n1 00:24:17.663 21:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.663 21:17:33 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.663 21:17:33 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.663 21:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.663 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.663 21:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.922 21:17:33 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.922 21:17:33 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.922 21:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.922 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.922 21:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.922 21:17:33 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.922 21:17:33 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:17.922 21:17:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.922 21:17:33 -- host/auth.sh@44 -- # digest=sha256 00:24:17.922 21:17:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.923 21:17:33 -- host/auth.sh@44 -- # keyid=1 00:24:17.923 21:17:33 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:17.923 21:17:33 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:17.923 21:17:33 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.923 21:17:33 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.923 21:17:33 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:17.923 21:17:33 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:17.923 21:17:33 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:17.923 21:17:33 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:17.923 21:17:33 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.923 21:17:33 -- host/auth.sh@57 -- # digest=sha256 00:24:17.923 21:17:33 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.923 21:17:33 -- host/auth.sh@57 -- # keyid=1 00:24:17.923 21:17:33 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.923 21:17:33 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.923 21:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.923 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.923 21:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.923 21:17:33 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.923 21:17:33 -- nvmf/common.sh@730 -- # local ip 00:24:17.923 21:17:33 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:17.923 21:17:33 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:17.923 21:17:33 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.923 21:17:33 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.923 
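The get_main_ns_ip helper traced around this point resolves which address the attach should dial: it maps each transport to the name of an environment variable and then expands that name indirectly, which is why the trace shows NVMF_INITIATOR_IP before 10.0.0.1 is echoed. A sketch reconstructed from the xtrace (the TEST_TRANSPORT variable name and the early returns are assumptions; the candidate map and the echoed 10.0.0.1 come from the log):

  get_main_ns_ip() {
      local ip var
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not addresses
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1         # the trace evaluates [[ -z tcp ]]
      var=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z $var ]] && return 1                    # the trace evaluates [[ -z NVMF_INITIATOR_IP ]]
      ip=${!var}                                   # indirect expansion, 10.0.0.1 in this run
      [[ -z $ip ]] && return 1
      echo "$ip"
  }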
21:17:33 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:17.923 21:17:33 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.923 21:17:33 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:17.923 21:17:33 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:17.923 21:17:33 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:17.923 21:17:33 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.923 21:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.923 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:18.182 nvme0n1 00:24:18.182 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.182 21:17:34 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.182 21:17:34 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.182 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.182 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.182 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.182 21:17:34 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.182 21:17:34 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.182 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.182 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.182 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.182 21:17:34 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.182 21:17:34 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:18.182 21:17:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.182 21:17:34 -- host/auth.sh@44 -- # digest=sha256 00:24:18.182 21:17:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.182 21:17:34 -- host/auth.sh@44 -- # keyid=2 00:24:18.182 21:17:34 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:18.182 21:17:34 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:18.182 21:17:34 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.182 21:17:34 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.182 21:17:34 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:18.182 21:17:34 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:18.182 21:17:34 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:18.182 21:17:34 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:18.182 21:17:34 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.182 21:17:34 -- host/auth.sh@57 -- # digest=sha256 00:24:18.182 21:17:34 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.182 21:17:34 -- host/auth.sh@57 -- # keyid=2 00:24:18.182 21:17:34 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.182 21:17:34 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:18.182 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.182 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.182 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.182 21:17:34 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.182 21:17:34 -- nvmf/common.sh@730 -- # local ip 00:24:18.182 21:17:34 
-- nvmf/common.sh@731 -- # ip_candidates=() 00:24:18.182 21:17:34 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:18.182 21:17:34 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.182 21:17:34 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.182 21:17:34 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:18.182 21:17:34 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.182 21:17:34 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:18.182 21:17:34 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:18.182 21:17:34 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:18.182 21:17:34 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.182 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.182 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 nvme0n1 00:24:18.749 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.749 21:17:34 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.749 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.749 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 21:17:34 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.749 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.750 21:17:34 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.750 21:17:34 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.750 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.750 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.750 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.750 21:17:34 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.750 21:17:34 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:18.750 21:17:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.750 21:17:34 -- host/auth.sh@44 -- # digest=sha256 00:24:18.750 21:17:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.750 21:17:34 -- host/auth.sh@44 -- # keyid=3 00:24:18.750 21:17:34 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:18.750 21:17:34 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:18.750 21:17:34 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.750 21:17:34 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.750 21:17:34 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:18.750 21:17:34 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:18.750 21:17:34 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:18.750 21:17:34 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:18.750 21:17:34 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.750 21:17:34 -- host/auth.sh@57 -- # digest=sha256 00:24:18.750 21:17:34 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.750 21:17:34 -- host/auth.sh@57 -- # keyid=3 00:24:18.750 21:17:34 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.750 21:17:34 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
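Stepping back, the host/auth.sh@101 through @104 markers show the sweep that generates this whole section: an outer loop over DH groups, an inner loop over key indices, a target-side provision, then a host-side connect, verify, and detach. A condensed sketch of that shape (the dhgroups contents are inferred from the groups seen in this log, the digest loop sits outside this excerpt, and the bare nvme0n1 lines interleaved in the output are presumably the attached namespace's block device name):

  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for dhgroup in "${dhgroups[@]}"; do                    # host/auth.sh@101
      for keyid in "${!keys[@]}"; do                     # host/auth.sh@102, keyids 0..4 here
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"      # @103: provision the kernel target
          connect_authenticate sha256 "$dhgroup" "$keyid"    # @104: set options, attach, check, detach
      done
  done

  # inside connect_authenticate, success is checked by controller name and torn down (@64, @65):
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0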
00:24:18.750 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.750 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.750 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.750 21:17:34 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.750 21:17:34 -- nvmf/common.sh@730 -- # local ip 00:24:18.750 21:17:34 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:18.750 21:17:34 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:18.750 21:17:34 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.750 21:17:34 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.750 21:17:34 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:18.750 21:17:34 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.750 21:17:34 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:18.750 21:17:34 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:18.750 21:17:34 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:18.750 21:17:34 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:18.750 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.750 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:19.009 nvme0n1 00:24:19.009 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.009 21:17:34 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.009 21:17:34 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.009 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.009 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:19.009 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.009 21:17:34 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.009 21:17:34 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.009 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.009 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:19.009 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.009 21:17:34 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.009 21:17:34 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:19.009 21:17:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.009 21:17:34 -- host/auth.sh@44 -- # digest=sha256 00:24:19.009 21:17:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.009 21:17:34 -- host/auth.sh@44 -- # keyid=4 00:24:19.009 21:17:34 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:19.009 21:17:34 -- host/auth.sh@46 -- # ckey= 00:24:19.009 21:17:34 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.009 21:17:34 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:19.009 21:17:34 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:19.009 21:17:34 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.009 21:17:34 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:19.009 21:17:34 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.009 21:17:34 -- host/auth.sh@57 -- # digest=sha256 00:24:19.009 21:17:34 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:19.009 21:17:34 -- host/auth.sh@57 -- # keyid=4 00:24:19.009 21:17:34 -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.009 21:17:34 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:19.009 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.009 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:19.268 21:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.268 21:17:34 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.268 21:17:34 -- nvmf/common.sh@730 -- # local ip 00:24:19.268 21:17:34 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:19.268 21:17:34 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:19.268 21:17:34 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.268 21:17:34 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.268 21:17:34 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:19.268 21:17:34 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.268 21:17:34 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:19.268 21:17:34 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:19.268 21:17:34 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:19.268 21:17:34 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.268 21:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.268 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:24:19.527 nvme0n1 00:24:19.527 21:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.527 21:17:35 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.527 21:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.527 21:17:35 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.527 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:24:19.527 21:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.527 21:17:35 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.527 21:17:35 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.527 21:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.527 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:24:19.527 21:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.527 21:17:35 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.527 21:17:35 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.527 21:17:35 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:19.527 21:17:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.527 21:17:35 -- host/auth.sh@44 -- # digest=sha256 00:24:19.527 21:17:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.527 21:17:35 -- host/auth.sh@44 -- # keyid=0 00:24:19.527 21:17:35 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:19.527 21:17:35 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:19.527 21:17:35 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.527 21:17:35 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.527 21:17:35 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:19.527 21:17:35 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:19.527 21:17:35 -- host/auth.sh@51 -- # echo 
DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:19.527 21:17:35 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:19.527 21:17:35 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.527 21:17:35 -- host/auth.sh@57 -- # digest=sha256 00:24:19.527 21:17:35 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.527 21:17:35 -- host/auth.sh@57 -- # keyid=0 00:24:19.527 21:17:35 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.527 21:17:35 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.527 21:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.527 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:24:19.527 21:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.527 21:17:35 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.527 21:17:35 -- nvmf/common.sh@730 -- # local ip 00:24:19.527 21:17:35 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:19.527 21:17:35 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:19.527 21:17:35 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.527 21:17:35 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.527 21:17:35 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:19.527 21:17:35 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.527 21:17:35 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:19.527 21:17:35 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:19.527 21:17:35 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:19.527 21:17:35 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.527 21:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.527 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:24:20.095 nvme0n1 00:24:20.095 21:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.095 21:17:35 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.095 21:17:35 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.095 21:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.095 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:24:20.095 21:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.354 21:17:36 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.354 21:17:36 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.354 21:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.354 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:24:20.354 21:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.354 21:17:36 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.354 21:17:36 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:20.354 21:17:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.354 21:17:36 -- host/auth.sh@44 -- # digest=sha256 00:24:20.354 21:17:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.354 21:17:36 -- host/auth.sh@44 -- # keyid=1 00:24:20.354 21:17:36 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:20.354 21:17:36 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 
00:24:20.354 21:17:36 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.354 21:17:36 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.354 21:17:36 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:20.354 21:17:36 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:20.354 21:17:36 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:20.354 21:17:36 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:20.354 21:17:36 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.354 21:17:36 -- host/auth.sh@57 -- # digest=sha256 00:24:20.354 21:17:36 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.354 21:17:36 -- host/auth.sh@57 -- # keyid=1 00:24:20.354 21:17:36 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.354 21:17:36 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:20.354 21:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.354 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:24:20.354 21:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.354 21:17:36 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.354 21:17:36 -- nvmf/common.sh@730 -- # local ip 00:24:20.354 21:17:36 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:20.354 21:17:36 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:20.354 21:17:36 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.354 21:17:36 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.354 21:17:36 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:20.354 21:17:36 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.354 21:17:36 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:20.354 21:17:36 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:20.354 21:17:36 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:20.354 21:17:36 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.354 21:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.354 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:24:20.921 nvme0n1 00:24:20.921 21:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.921 21:17:36 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.921 21:17:36 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.921 21:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.921 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:24:20.921 21:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.921 21:17:36 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.921 21:17:36 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.921 21:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.921 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:24:20.921 21:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.921 21:17:36 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.921 21:17:36 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:20.921 21:17:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.921 
21:17:36 -- host/auth.sh@44 -- # digest=sha256 00:24:20.921 21:17:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.921 21:17:36 -- host/auth.sh@44 -- # keyid=2 00:24:20.922 21:17:36 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:20.922 21:17:36 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:20.922 21:17:36 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.922 21:17:36 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.922 21:17:36 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:20.922 21:17:36 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:20.922 21:17:36 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:20.922 21:17:36 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:20.922 21:17:36 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.922 21:17:36 -- host/auth.sh@57 -- # digest=sha256 00:24:20.922 21:17:36 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.922 21:17:36 -- host/auth.sh@57 -- # keyid=2 00:24:20.922 21:17:36 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.922 21:17:36 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:20.922 21:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.922 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:24:20.922 21:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.922 21:17:36 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.922 21:17:36 -- nvmf/common.sh@730 -- # local ip 00:24:20.922 21:17:36 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:20.922 21:17:36 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:20.922 21:17:36 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.922 21:17:36 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.922 21:17:36 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:20.922 21:17:36 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.922 21:17:36 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:20.922 21:17:36 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:20.922 21:17:36 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:20.922 21:17:36 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.922 21:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.922 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:24:21.524 nvme0n1 00:24:21.524 21:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.524 21:17:37 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.524 21:17:37 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.524 21:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.524 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:24:21.524 21:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.524 21:17:37 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.524 21:17:37 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.524 21:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.524 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:24:21.524 
21:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.524 21:17:37 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.524 21:17:37 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:21.524 21:17:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.524 21:17:37 -- host/auth.sh@44 -- # digest=sha256 00:24:21.524 21:17:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.524 21:17:37 -- host/auth.sh@44 -- # keyid=3 00:24:21.524 21:17:37 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:21.524 21:17:37 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:21.524 21:17:37 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.524 21:17:37 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.524 21:17:37 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:21.524 21:17:37 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:21.524 21:17:37 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:21.524 21:17:37 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:21.524 21:17:37 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.524 21:17:37 -- host/auth.sh@57 -- # digest=sha256 00:24:21.524 21:17:37 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.524 21:17:37 -- host/auth.sh@57 -- # keyid=3 00:24:21.524 21:17:37 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.524 21:17:37 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:21.524 21:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.524 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:24:21.524 21:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.524 21:17:37 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.524 21:17:37 -- nvmf/common.sh@730 -- # local ip 00:24:21.524 21:17:37 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:21.524 21:17:37 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:21.524 21:17:37 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.524 21:17:37 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.524 21:17:37 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:21.524 21:17:37 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.524 21:17:37 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:21.524 21:17:37 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:21.524 21:17:37 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:21.524 21:17:37 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:21.524 21:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.524 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:24:22.096 nvme0n1 00:24:22.096 21:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.096 21:17:37 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.096 21:17:37 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.096 21:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.096 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:24:22.096 21:17:37 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.096 21:17:37 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.096 21:17:37 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.096 21:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.096 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:24:22.096 21:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.096 21:17:37 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.096 21:17:37 -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:22.096 21:17:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.096 21:17:37 -- host/auth.sh@44 -- # digest=sha256 00:24:22.096 21:17:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.096 21:17:37 -- host/auth.sh@44 -- # keyid=4 00:24:22.096 21:17:37 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:22.096 21:17:37 -- host/auth.sh@46 -- # ckey= 00:24:22.096 21:17:37 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.096 21:17:37 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:22.096 21:17:37 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:22.096 21:17:37 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.096 21:17:37 -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:22.096 21:17:37 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.096 21:17:37 -- host/auth.sh@57 -- # digest=sha256 00:24:22.096 21:17:37 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:22.096 21:17:37 -- host/auth.sh@57 -- # keyid=4 00:24:22.096 21:17:37 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.096 21:17:37 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:22.096 21:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.096 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:24:22.096 21:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.096 21:17:37 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.096 21:17:37 -- nvmf/common.sh@730 -- # local ip 00:24:22.096 21:17:37 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:22.096 21:17:37 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:22.096 21:17:37 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.096 21:17:37 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.096 21:17:37 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:22.096 21:17:37 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.096 21:17:37 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:22.096 21:17:37 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:22.096 21:17:37 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:22.096 21:17:37 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.096 21:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.096 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:24:22.664 nvme0n1 00:24:22.664 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.664 21:17:38 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.664 21:17:38 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.664 21:17:38 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.664 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:22.664 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.664 21:17:38 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.944 21:17:38 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.944 21:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.944 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:22.944 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.944 21:17:38 -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:22.944 21:17:38 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.944 21:17:38 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.944 21:17:38 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:22.944 21:17:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.944 21:17:38 -- host/auth.sh@44 -- # digest=sha384 00:24:22.944 21:17:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.944 21:17:38 -- host/auth.sh@44 -- # keyid=0 00:24:22.944 21:17:38 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:22.944 21:17:38 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:22.944 21:17:38 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.944 21:17:38 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.944 21:17:38 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:22.944 21:17:38 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:22.944 21:17:38 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:22.944 21:17:38 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:22.944 21:17:38 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.944 21:17:38 -- host/auth.sh@57 -- # digest=sha384 00:24:22.944 21:17:38 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.944 21:17:38 -- host/auth.sh@57 -- # keyid=0 00:24:22.945 21:17:38 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.945 21:17:38 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:22.945 21:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.945 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:22.945 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.945 21:17:38 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.945 21:17:38 -- nvmf/common.sh@730 -- # local ip 00:24:22.945 21:17:38 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:22.945 21:17:38 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:22.945 21:17:38 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.945 21:17:38 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.945 21:17:38 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:22.945 21:17:38 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.945 21:17:38 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:22.945 21:17:38 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:22.945 21:17:38 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:22.945 21:17:38 -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.945 21:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.945 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:22.945 nvme0n1 00:24:22.945 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.945 21:17:38 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.945 21:17:38 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.945 21:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.945 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:22.945 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.945 21:17:38 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.945 21:17:38 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.945 21:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.945 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:22.945 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.945 21:17:38 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.945 21:17:38 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:22.945 21:17:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.945 21:17:38 -- host/auth.sh@44 -- # digest=sha384 00:24:22.945 21:17:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.945 21:17:38 -- host/auth.sh@44 -- # keyid=1 00:24:22.945 21:17:38 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:22.945 21:17:38 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:22.945 21:17:38 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.945 21:17:38 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.945 21:17:38 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:22.945 21:17:38 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:22.945 21:17:38 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:22.945 21:17:38 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:22.945 21:17:38 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.945 21:17:38 -- host/auth.sh@57 -- # digest=sha384 00:24:22.945 21:17:38 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.945 21:17:38 -- host/auth.sh@57 -- # keyid=1 00:24:22.945 21:17:38 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.945 21:17:38 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:22.945 21:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.945 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:22.945 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.945 21:17:38 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.945 21:17:38 -- nvmf/common.sh@730 -- # local ip 00:24:22.945 21:17:38 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:22.945 21:17:38 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:22.945 21:17:38 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.945 21:17:38 
-- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.945 21:17:38 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:22.945 21:17:38 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.945 21:17:38 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:22.945 21:17:38 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:22.945 21:17:38 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:22.945 21:17:38 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.945 21:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.945 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:23.205 nvme0n1 00:24:23.205 21:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.205 21:17:38 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.205 21:17:38 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.205 21:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.205 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:24:23.205 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.205 21:17:39 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.205 21:17:39 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.205 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.205 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.205 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.205 21:17:39 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.205 21:17:39 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:23.205 21:17:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.205 21:17:39 -- host/auth.sh@44 -- # digest=sha384 00:24:23.205 21:17:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.205 21:17:39 -- host/auth.sh@44 -- # keyid=2 00:24:23.205 21:17:39 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:23.205 21:17:39 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:23.205 21:17:39 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.205 21:17:39 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.205 21:17:39 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:23.205 21:17:39 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:23.205 21:17:39 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:23.205 21:17:39 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:23.205 21:17:39 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.205 21:17:39 -- host/auth.sh@57 -- # digest=sha384 00:24:23.205 21:17:39 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.205 21:17:39 -- host/auth.sh@57 -- # keyid=2 00:24:23.205 21:17:39 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.205 21:17:39 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.205 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.205 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.205 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.205 21:17:39 -- host/auth.sh@61 -- # get_main_ns_ip 
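The source references visible in the trace (host/auth.sh@100, @101 and @102) show that the cycle above is driven by three nested loops over digests, DH groups and key IDs, which is why the same attach/verify/detach pattern recurs for sha256 and then sha384 with each ffdhe group. A sketch of that loop structure; the array contents listed here are only the values that actually appear in this excerpt (sha256/sha384, ffdhe2048 through ffdhe8192, key IDs 0 to 4) and are otherwise an assumption, not the script's verbatim lists:

# Loop structure implied by host/auth.sh@100-103 in the trace.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                        # keys[] holds the DHHC-1 secrets 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"  # program the kernel nvmet target side
            connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify and detach on the SPDK host side
        done
    done
done
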
00:24:23.205 21:17:39 -- nvmf/common.sh@730 -- # local ip 00:24:23.205 21:17:39 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:23.205 21:17:39 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:23.205 21:17:39 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.205 21:17:39 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.205 21:17:39 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:23.205 21:17:39 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.205 21:17:39 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:23.205 21:17:39 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:23.205 21:17:39 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:23.205 21:17:39 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.205 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.205 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.464 nvme0n1 00:24:23.464 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.464 21:17:39 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.464 21:17:39 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.464 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.464 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.464 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.464 21:17:39 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.464 21:17:39 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.464 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.464 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.464 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.464 21:17:39 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.464 21:17:39 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:23.464 21:17:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.464 21:17:39 -- host/auth.sh@44 -- # digest=sha384 00:24:23.464 21:17:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.464 21:17:39 -- host/auth.sh@44 -- # keyid=3 00:24:23.464 21:17:39 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:23.464 21:17:39 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:23.464 21:17:39 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.464 21:17:39 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.464 21:17:39 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:23.464 21:17:39 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:23.464 21:17:39 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:23.464 21:17:39 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:23.464 21:17:39 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.464 21:17:39 -- host/auth.sh@57 -- # digest=sha384 00:24:23.464 21:17:39 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.464 21:17:39 -- host/auth.sh@57 -- # keyid=3 00:24:23.464 21:17:39 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.464 21:17:39 -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.464 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.464 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.464 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.464 21:17:39 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.464 21:17:39 -- nvmf/common.sh@730 -- # local ip 00:24:23.464 21:17:39 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:23.464 21:17:39 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:23.464 21:17:39 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.464 21:17:39 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.464 21:17:39 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:23.464 21:17:39 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.464 21:17:39 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:23.464 21:17:39 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:23.464 21:17:39 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:23.464 21:17:39 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.464 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.464 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.724 nvme0n1 00:24:23.724 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.724 21:17:39 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.724 21:17:39 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.724 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.724 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.724 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.724 21:17:39 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.724 21:17:39 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.724 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.724 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.724 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.724 21:17:39 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.724 21:17:39 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:23.724 21:17:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.724 21:17:39 -- host/auth.sh@44 -- # digest=sha384 00:24:23.724 21:17:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.724 21:17:39 -- host/auth.sh@44 -- # keyid=4 00:24:23.724 21:17:39 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:23.724 21:17:39 -- host/auth.sh@46 -- # ckey= 00:24:23.724 21:17:39 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.724 21:17:39 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.724 21:17:39 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:23.724 21:17:39 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.724 21:17:39 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:23.724 21:17:39 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.724 21:17:39 -- host/auth.sh@57 -- # digest=sha384 00:24:23.724 21:17:39 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.724 21:17:39 -- 
host/auth.sh@57 -- # keyid=4 00:24:23.724 21:17:39 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.724 21:17:39 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.724 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.724 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.724 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.724 21:17:39 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.724 21:17:39 -- nvmf/common.sh@730 -- # local ip 00:24:23.724 21:17:39 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:23.724 21:17:39 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:23.724 21:17:39 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.724 21:17:39 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.724 21:17:39 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:23.724 21:17:39 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.724 21:17:39 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:23.724 21:17:39 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:23.724 21:17:39 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:23.724 21:17:39 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.724 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.724 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.724 nvme0n1 00:24:23.724 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.724 21:17:39 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.724 21:17:39 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.724 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.724 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.724 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.983 21:17:39 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.983 21:17:39 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.983 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.983 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.983 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.983 21:17:39 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.983 21:17:39 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.983 21:17:39 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:23.983 21:17:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.983 21:17:39 -- host/auth.sh@44 -- # digest=sha384 00:24:23.983 21:17:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.983 21:17:39 -- host/auth.sh@44 -- # keyid=0 00:24:23.983 21:17:39 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:23.983 21:17:39 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:23.983 21:17:39 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.983 21:17:39 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.983 21:17:39 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:23.983 21:17:39 -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:23.983 21:17:39 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:23.983 21:17:39 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:23.983 21:17:39 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.983 21:17:39 -- host/auth.sh@57 -- # digest=sha384 00:24:23.983 21:17:39 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.983 21:17:39 -- host/auth.sh@57 -- # keyid=0 00:24:23.983 21:17:39 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.983 21:17:39 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:23.983 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.983 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.983 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.983 21:17:39 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.983 21:17:39 -- nvmf/common.sh@730 -- # local ip 00:24:23.983 21:17:39 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:23.983 21:17:39 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:23.984 21:17:39 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.984 21:17:39 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.984 21:17:39 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:23.984 21:17:39 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.984 21:17:39 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:23.984 21:17:39 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:23.984 21:17:39 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:23.984 21:17:39 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.984 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.984 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.984 nvme0n1 00:24:23.984 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.984 21:17:39 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.984 21:17:39 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.984 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.984 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.984 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.242 21:17:39 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.242 21:17:39 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.242 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.242 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:24.242 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.242 21:17:39 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.242 21:17:39 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:24.242 21:17:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.242 21:17:39 -- host/auth.sh@44 -- # digest=sha384 00:24:24.242 21:17:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.242 21:17:39 -- host/auth.sh@44 -- # keyid=1 00:24:24.242 21:17:39 -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:24.242 21:17:39 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:24.242 21:17:39 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.242 21:17:39 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.242 21:17:39 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:24.243 21:17:39 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:24.243 21:17:39 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:24.243 21:17:39 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:24.243 21:17:39 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.243 21:17:39 -- host/auth.sh@57 -- # digest=sha384 00:24:24.243 21:17:39 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.243 21:17:39 -- host/auth.sh@57 -- # keyid=1 00:24:24.243 21:17:39 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.243 21:17:39 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.243 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.243 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:24.243 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.243 21:17:39 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.243 21:17:39 -- nvmf/common.sh@730 -- # local ip 00:24:24.243 21:17:39 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:24.243 21:17:39 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:24.243 21:17:39 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.243 21:17:39 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.243 21:17:39 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:24.243 21:17:39 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.243 21:17:39 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:24.243 21:17:39 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:24.243 21:17:39 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:24.243 21:17:39 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.243 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.243 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:24.243 nvme0n1 00:24:24.243 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.243 21:17:40 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.243 21:17:40 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.243 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.243 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.243 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.502 21:17:40 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.502 21:17:40 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.502 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.502 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.502 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.502 21:17:40 -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.502 21:17:40 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:24.502 21:17:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.502 21:17:40 -- host/auth.sh@44 -- # digest=sha384 00:24:24.502 21:17:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.502 21:17:40 -- host/auth.sh@44 -- # keyid=2 00:24:24.502 21:17:40 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:24.502 21:17:40 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:24.502 21:17:40 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.502 21:17:40 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.502 21:17:40 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:24.502 21:17:40 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:24.502 21:17:40 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:24.502 21:17:40 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:24.502 21:17:40 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.502 21:17:40 -- host/auth.sh@57 -- # digest=sha384 00:24:24.502 21:17:40 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.502 21:17:40 -- host/auth.sh@57 -- # keyid=2 00:24:24.502 21:17:40 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.502 21:17:40 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.502 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.502 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.502 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.502 21:17:40 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.502 21:17:40 -- nvmf/common.sh@730 -- # local ip 00:24:24.502 21:17:40 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:24.502 21:17:40 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:24.502 21:17:40 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.502 21:17:40 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.502 21:17:40 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:24.502 21:17:40 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.502 21:17:40 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:24.502 21:17:40 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:24.502 21:17:40 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:24.502 21:17:40 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.502 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.502 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.502 nvme0n1 00:24:24.502 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.502 21:17:40 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.502 21:17:40 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.502 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.502 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.502 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.502 21:17:40 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.502 
21:17:40 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.502 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.502 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.503 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.503 21:17:40 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.503 21:17:40 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:24.503 21:17:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.503 21:17:40 -- host/auth.sh@44 -- # digest=sha384 00:24:24.503 21:17:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.503 21:17:40 -- host/auth.sh@44 -- # keyid=3 00:24:24.503 21:17:40 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:24.503 21:17:40 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:24.503 21:17:40 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.503 21:17:40 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.503 21:17:40 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:24.503 21:17:40 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:24.503 21:17:40 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:24.503 21:17:40 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:24.503 21:17:40 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.503 21:17:40 -- host/auth.sh@57 -- # digest=sha384 00:24:24.503 21:17:40 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.503 21:17:40 -- host/auth.sh@57 -- # keyid=3 00:24:24.503 21:17:40 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.503 21:17:40 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.503 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.503 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.503 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.503 21:17:40 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.503 21:17:40 -- nvmf/common.sh@730 -- # local ip 00:24:24.503 21:17:40 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:24.503 21:17:40 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:24.503 21:17:40 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.503 21:17:40 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.503 21:17:40 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:24.503 21:17:40 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.503 21:17:40 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:24.503 21:17:40 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:24.503 21:17:40 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:24.503 21:17:40 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.503 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.503 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.762 nvme0n1 00:24:24.762 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.762 21:17:40 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
00:24:24.762 21:17:40 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.762 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.762 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.762 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.762 21:17:40 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.762 21:17:40 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.762 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.762 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.762 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.762 21:17:40 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.762 21:17:40 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:24.762 21:17:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.762 21:17:40 -- host/auth.sh@44 -- # digest=sha384 00:24:24.762 21:17:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.762 21:17:40 -- host/auth.sh@44 -- # keyid=4 00:24:24.762 21:17:40 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:24.762 21:17:40 -- host/auth.sh@46 -- # ckey= 00:24:24.762 21:17:40 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.762 21:17:40 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.762 21:17:40 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:24.762 21:17:40 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.762 21:17:40 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:24.762 21:17:40 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.762 21:17:40 -- host/auth.sh@57 -- # digest=sha384 00:24:24.762 21:17:40 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.762 21:17:40 -- host/auth.sh@57 -- # keyid=4 00:24:24.762 21:17:40 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.762 21:17:40 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.762 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.762 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.762 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.762 21:17:40 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.762 21:17:40 -- nvmf/common.sh@730 -- # local ip 00:24:24.762 21:17:40 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:24.762 21:17:40 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:24.762 21:17:40 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.762 21:17:40 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.762 21:17:40 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:24.762 21:17:40 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.762 21:17:40 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:24.762 21:17:40 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:24.762 21:17:40 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:24.762 21:17:40 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.762 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.762 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:25.022 nvme0n1 
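Each attach in the trace is preceded by get_main_ns_ip (nvmf/common.sh@730-744), which picks the environment variable that carries the address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, resolving to 10.0.0.1 in this run. A rough reconstruction from those trace lines; the transport variable name, the error returns and the indirect expansion that turns the variable name into 10.0.0.1 are assumptions rather than the verbatim helper:

# Reconstructed from the nvmf/common.sh@730-744 lines in the trace above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # assumed: the transport comes from $TEST_TRANSPORT (the trace only shows the expanded value "tcp")
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # assumed indirect expansion; resolves to 10.0.0.1 in this run
    echo "${!ip}"
}
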
00:24:25.022 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.022 21:17:40 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.022 21:17:40 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.022 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.022 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:25.022 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.022 21:17:40 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.022 21:17:40 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.022 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.022 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:25.022 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.022 21:17:40 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.022 21:17:40 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.022 21:17:40 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:25.022 21:17:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.022 21:17:40 -- host/auth.sh@44 -- # digest=sha384 00:24:25.022 21:17:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.022 21:17:40 -- host/auth.sh@44 -- # keyid=0 00:24:25.022 21:17:40 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:25.022 21:17:40 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:25.022 21:17:40 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.022 21:17:40 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.022 21:17:40 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:25.022 21:17:40 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:25.022 21:17:40 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:25.022 21:17:40 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:25.022 21:17:40 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.022 21:17:40 -- host/auth.sh@57 -- # digest=sha384 00:24:25.022 21:17:40 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.022 21:17:40 -- host/auth.sh@57 -- # keyid=0 00:24:25.022 21:17:40 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.022 21:17:40 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:25.022 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.022 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:25.022 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.022 21:17:40 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.022 21:17:40 -- nvmf/common.sh@730 -- # local ip 00:24:25.022 21:17:40 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:25.022 21:17:40 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:25.022 21:17:40 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.022 21:17:40 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.022 21:17:40 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:25.022 21:17:40 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.022 21:17:40 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 
00:24:25.022 21:17:40 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:25.022 21:17:40 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:25.022 21:17:40 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.022 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.022 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:24:25.282 nvme0n1 00:24:25.282 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.282 21:17:41 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.282 21:17:41 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.282 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.282 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.282 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.282 21:17:41 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.282 21:17:41 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.282 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.282 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.282 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.282 21:17:41 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.282 21:17:41 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:25.282 21:17:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.282 21:17:41 -- host/auth.sh@44 -- # digest=sha384 00:24:25.282 21:17:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.283 21:17:41 -- host/auth.sh@44 -- # keyid=1 00:24:25.283 21:17:41 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:25.283 21:17:41 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:25.283 21:17:41 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.283 21:17:41 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.283 21:17:41 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:25.283 21:17:41 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:25.283 21:17:41 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:25.283 21:17:41 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:25.283 21:17:41 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.283 21:17:41 -- host/auth.sh@57 -- # digest=sha384 00:24:25.283 21:17:41 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.283 21:17:41 -- host/auth.sh@57 -- # keyid=1 00:24:25.283 21:17:41 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.283 21:17:41 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:25.283 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.283 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.283 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.283 21:17:41 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.283 21:17:41 -- nvmf/common.sh@730 -- # local ip 00:24:25.283 21:17:41 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:25.283 21:17:41 -- 
nvmf/common.sh@731 -- # local -A ip_candidates 00:24:25.283 21:17:41 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.283 21:17:41 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.283 21:17:41 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:25.283 21:17:41 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.283 21:17:41 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:25.283 21:17:41 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:25.283 21:17:41 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:25.283 21:17:41 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.283 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.283 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 nvme0n1 00:24:25.542 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.542 21:17:41 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.542 21:17:41 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.542 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.542 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.801 21:17:41 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.801 21:17:41 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.801 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.801 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.801 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.801 21:17:41 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.801 21:17:41 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:25.801 21:17:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.801 21:17:41 -- host/auth.sh@44 -- # digest=sha384 00:24:25.801 21:17:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.801 21:17:41 -- host/auth.sh@44 -- # keyid=2 00:24:25.801 21:17:41 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:25.801 21:17:41 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:25.801 21:17:41 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.801 21:17:41 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.801 21:17:41 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:25.801 21:17:41 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:25.801 21:17:41 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:25.801 21:17:41 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:25.801 21:17:41 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.801 21:17:41 -- host/auth.sh@57 -- # digest=sha384 00:24:25.801 21:17:41 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.801 21:17:41 -- host/auth.sh@57 -- # keyid=2 00:24:25.801 21:17:41 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.801 21:17:41 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:25.801 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.801 21:17:41 -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.801 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.801 21:17:41 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.801 21:17:41 -- nvmf/common.sh@730 -- # local ip 00:24:25.801 21:17:41 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:25.801 21:17:41 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:25.801 21:17:41 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.801 21:17:41 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.801 21:17:41 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:25.801 21:17:41 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.801 21:17:41 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:25.801 21:17:41 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:25.801 21:17:41 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:25.801 21:17:41 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.801 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.801 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:26.061 nvme0n1 00:24:26.061 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.061 21:17:41 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.061 21:17:41 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.061 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.061 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:26.061 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.061 21:17:41 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.061 21:17:41 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.061 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.061 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:26.061 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.061 21:17:41 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.061 21:17:41 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:26.061 21:17:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.061 21:17:41 -- host/auth.sh@44 -- # digest=sha384 00:24:26.061 21:17:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.061 21:17:41 -- host/auth.sh@44 -- # keyid=3 00:24:26.061 21:17:41 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:26.061 21:17:41 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:26.061 21:17:41 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.061 21:17:41 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.061 21:17:41 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:26.061 21:17:41 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:26.061 21:17:41 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:26.061 21:17:41 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:26.061 21:17:41 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.061 21:17:41 -- host/auth.sh@57 -- # digest=sha384 00:24:26.061 21:17:41 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.061 21:17:41 -- 
host/auth.sh@57 -- # keyid=3 00:24:26.061 21:17:41 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.061 21:17:41 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:26.061 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.061 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:26.061 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.061 21:17:41 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.061 21:17:41 -- nvmf/common.sh@730 -- # local ip 00:24:26.061 21:17:41 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:26.061 21:17:41 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:26.061 21:17:41 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.061 21:17:41 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.061 21:17:41 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:26.061 21:17:41 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.061 21:17:41 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:26.061 21:17:41 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:26.061 21:17:41 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:26.061 21:17:41 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.061 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.061 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:24:26.321 nvme0n1 00:24:26.321 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.321 21:17:42 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.321 21:17:42 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.321 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.321 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:26.321 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.321 21:17:42 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.321 21:17:42 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.321 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.321 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:26.321 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.321 21:17:42 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.321 21:17:42 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:26.321 21:17:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.321 21:17:42 -- host/auth.sh@44 -- # digest=sha384 00:24:26.321 21:17:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.321 21:17:42 -- host/auth.sh@44 -- # keyid=4 00:24:26.321 21:17:42 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:26.321 21:17:42 -- host/auth.sh@46 -- # ckey= 00:24:26.321 21:17:42 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.321 21:17:42 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.321 21:17:42 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:26.321 21:17:42 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.321 21:17:42 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:26.321 21:17:42 -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:24:26.321 21:17:42 -- host/auth.sh@57 -- # digest=sha384 00:24:26.321 21:17:42 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.321 21:17:42 -- host/auth.sh@57 -- # keyid=4 00:24:26.321 21:17:42 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.321 21:17:42 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:26.321 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.321 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:26.321 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.321 21:17:42 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.321 21:17:42 -- nvmf/common.sh@730 -- # local ip 00:24:26.321 21:17:42 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:26.321 21:17:42 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:26.321 21:17:42 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.321 21:17:42 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.321 21:17:42 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:26.321 21:17:42 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.321 21:17:42 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:26.321 21:17:42 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:26.321 21:17:42 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:26.321 21:17:42 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.321 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.321 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:26.580 nvme0n1 00:24:26.580 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.580 21:17:42 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.580 21:17:42 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.580 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.580 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:26.580 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.580 21:17:42 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.580 21:17:42 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.580 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.580 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:26.580 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.580 21:17:42 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.580 21:17:42 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.580 21:17:42 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:26.580 21:17:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.580 21:17:42 -- host/auth.sh@44 -- # digest=sha384 00:24:26.580 21:17:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.580 21:17:42 -- host/auth.sh@44 -- # keyid=0 00:24:26.580 21:17:42 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:26.580 21:17:42 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:26.580 21:17:42 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.580 21:17:42 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.580 21:17:42 -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:26.580 21:17:42 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:26.580 21:17:42 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:26.580 21:17:42 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:26.580 21:17:42 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.580 21:17:42 -- host/auth.sh@57 -- # digest=sha384 00:24:26.580 21:17:42 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.580 21:17:42 -- host/auth.sh@57 -- # keyid=0 00:24:26.580 21:17:42 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.580 21:17:42 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:26.580 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.580 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:26.580 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.580 21:17:42 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.580 21:17:42 -- nvmf/common.sh@730 -- # local ip 00:24:26.581 21:17:42 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:26.581 21:17:42 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:26.581 21:17:42 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.581 21:17:42 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.581 21:17:42 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:26.581 21:17:42 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.581 21:17:42 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:26.581 21:17:42 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:26.581 21:17:42 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:26.581 21:17:42 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.581 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.581 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.149 nvme0n1 00:24:27.149 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.149 21:17:42 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.149 21:17:42 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.149 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.149 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.149 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.149 21:17:42 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.149 21:17:42 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.149 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.149 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.149 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.149 21:17:42 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.149 21:17:42 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:27.149 21:17:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.149 21:17:42 -- host/auth.sh@44 -- # digest=sha384 00:24:27.149 21:17:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.149 21:17:42 -- host/auth.sh@44 -- # keyid=1 
00:24:27.149 21:17:42 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:27.149 21:17:42 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:27.149 21:17:42 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.149 21:17:42 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.149 21:17:42 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:27.149 21:17:42 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:27.149 21:17:42 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:27.149 21:17:42 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:27.149 21:17:42 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.149 21:17:42 -- host/auth.sh@57 -- # digest=sha384 00:24:27.149 21:17:42 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.149 21:17:42 -- host/auth.sh@57 -- # keyid=1 00:24:27.149 21:17:42 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.149 21:17:42 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:27.150 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.150 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.150 21:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.150 21:17:42 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.150 21:17:42 -- nvmf/common.sh@730 -- # local ip 00:24:27.150 21:17:42 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:27.150 21:17:42 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:27.150 21:17:42 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.150 21:17:42 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.150 21:17:42 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:27.150 21:17:42 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.150 21:17:42 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:27.150 21:17:42 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:27.150 21:17:42 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:27.150 21:17:42 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.150 21:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.150 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.413 nvme0n1 00:24:27.413 21:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.413 21:17:43 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.413 21:17:43 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.413 21:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.413 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:24:27.413 21:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.413 21:17:43 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.413 21:17:43 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.413 21:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.413 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:24:27.413 21:17:43 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:24:27.413 21:17:43 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.413 21:17:43 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:27.413 21:17:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.413 21:17:43 -- host/auth.sh@44 -- # digest=sha384 00:24:27.413 21:17:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.413 21:17:43 -- host/auth.sh@44 -- # keyid=2 00:24:27.413 21:17:43 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:27.413 21:17:43 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:27.413 21:17:43 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.413 21:17:43 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.413 21:17:43 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:27.413 21:17:43 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:27.413 21:17:43 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:27.413 21:17:43 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:27.413 21:17:43 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.413 21:17:43 -- host/auth.sh@57 -- # digest=sha384 00:24:27.413 21:17:43 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.413 21:17:43 -- host/auth.sh@57 -- # keyid=2 00:24:27.413 21:17:43 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.413 21:17:43 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:27.413 21:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.413 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:24:27.413 21:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.413 21:17:43 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.413 21:17:43 -- nvmf/common.sh@730 -- # local ip 00:24:27.413 21:17:43 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:27.413 21:17:43 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:27.413 21:17:43 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.413 21:17:43 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.413 21:17:43 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:27.413 21:17:43 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.413 21:17:43 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:27.413 21:17:43 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:27.413 21:17:43 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:27.413 21:17:43 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.413 21:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.413 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:24:27.984 nvme0n1 00:24:27.984 21:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.984 21:17:43 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.984 21:17:43 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.984 21:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.984 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:24:27.984 21:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.984 21:17:43 -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:27.984 21:17:43 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.984 21:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.984 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:24:27.984 21:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.984 21:17:43 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.984 21:17:43 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:27.984 21:17:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.984 21:17:43 -- host/auth.sh@44 -- # digest=sha384 00:24:27.984 21:17:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.984 21:17:43 -- host/auth.sh@44 -- # keyid=3 00:24:27.984 21:17:43 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:27.984 21:17:43 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:27.984 21:17:43 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.984 21:17:43 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.984 21:17:43 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:27.984 21:17:43 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:27.984 21:17:43 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:27.984 21:17:43 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:27.984 21:17:43 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.984 21:17:43 -- host/auth.sh@57 -- # digest=sha384 00:24:27.984 21:17:43 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.984 21:17:43 -- host/auth.sh@57 -- # keyid=3 00:24:27.984 21:17:43 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.984 21:17:43 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:27.984 21:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.984 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:24:27.984 21:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.984 21:17:43 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.984 21:17:43 -- nvmf/common.sh@730 -- # local ip 00:24:27.984 21:17:43 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:27.984 21:17:43 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:27.984 21:17:43 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.984 21:17:43 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.984 21:17:43 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:27.984 21:17:43 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.984 21:17:43 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:27.984 21:17:43 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:27.984 21:17:43 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:27.984 21:17:43 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:27.984 21:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.984 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:24:28.243 nvme0n1 00:24:28.243 21:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.243 21:17:44 -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:28.243 21:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.243 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:24:28.243 21:17:44 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.243 21:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.243 21:17:44 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.243 21:17:44 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.243 21:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.243 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:24:28.502 21:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.502 21:17:44 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.502 21:17:44 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:28.502 21:17:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.502 21:17:44 -- host/auth.sh@44 -- # digest=sha384 00:24:28.502 21:17:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.502 21:17:44 -- host/auth.sh@44 -- # keyid=4 00:24:28.502 21:17:44 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:28.502 21:17:44 -- host/auth.sh@46 -- # ckey= 00:24:28.502 21:17:44 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.502 21:17:44 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.502 21:17:44 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:28.502 21:17:44 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.502 21:17:44 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:28.502 21:17:44 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.502 21:17:44 -- host/auth.sh@57 -- # digest=sha384 00:24:28.502 21:17:44 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.502 21:17:44 -- host/auth.sh@57 -- # keyid=4 00:24:28.502 21:17:44 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.502 21:17:44 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:28.502 21:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.502 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:24:28.502 21:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.502 21:17:44 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.502 21:17:44 -- nvmf/common.sh@730 -- # local ip 00:24:28.502 21:17:44 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:28.502 21:17:44 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:28.502 21:17:44 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.502 21:17:44 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.502 21:17:44 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:28.502 21:17:44 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.502 21:17:44 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:28.502 21:17:44 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:28.502 21:17:44 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:28.502 21:17:44 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.502 21:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.502 21:17:44 -- common/autotest_common.sh@10 -- # set +x 
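Before each attach, the trace also runs the nvmf/common.sh address lookup (@730-@744) that resolves which exported address to dial; for TCP it selects NVMF_INITIATOR_IP and prints 10.0.0.1. A compact sketch of that helper as it appears in the trace (the candidate table and the indirect expansion are shown as logged; the name of the transport variable is an assumption here):

  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator IP
    # $TEST_TRANSPORT is assumed to hold "tcp" in this run
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] && echo "${!ip}"             # indirect expansion -> 10.0.0.1
  }

The echoed address feeds the -a argument of bdev_nvme_attach_controller, which is why every connect in this run targets 10.0.0.1:4420.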
00:24:28.761 nvme0n1 00:24:28.761 21:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.761 21:17:44 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.761 21:17:44 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.761 21:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.761 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:24:28.761 21:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.761 21:17:44 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.761 21:17:44 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.761 21:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.761 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:24:28.761 21:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.761 21:17:44 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.761 21:17:44 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.761 21:17:44 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:28.761 21:17:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.761 21:17:44 -- host/auth.sh@44 -- # digest=sha384 00:24:28.761 21:17:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.761 21:17:44 -- host/auth.sh@44 -- # keyid=0 00:24:28.761 21:17:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:28.761 21:17:44 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:28.761 21:17:44 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.761 21:17:44 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.761 21:17:44 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:28.761 21:17:44 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:28.761 21:17:44 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:28.761 21:17:44 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:28.761 21:17:44 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.761 21:17:44 -- host/auth.sh@57 -- # digest=sha384 00:24:28.761 21:17:44 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.761 21:17:44 -- host/auth.sh@57 -- # keyid=0 00:24:28.761 21:17:44 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.761 21:17:44 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.761 21:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.761 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:24:28.761 21:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.761 21:17:44 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.762 21:17:44 -- nvmf/common.sh@730 -- # local ip 00:24:28.762 21:17:44 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:28.762 21:17:44 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:28.762 21:17:44 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.762 21:17:44 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.762 21:17:44 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:28.762 21:17:44 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.762 21:17:44 -- nvmf/common.sh@737 -- # 
ip=NVMF_INITIATOR_IP 00:24:28.762 21:17:44 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:28.762 21:17:44 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:28.762 21:17:44 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.762 21:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.762 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:24:29.331 nvme0n1 00:24:29.331 21:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.331 21:17:45 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.331 21:17:45 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.331 21:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.331 21:17:45 -- common/autotest_common.sh@10 -- # set +x 00:24:29.331 21:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.331 21:17:45 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.331 21:17:45 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.331 21:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.331 21:17:45 -- common/autotest_common.sh@10 -- # set +x 00:24:29.331 21:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.331 21:17:45 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.331 21:17:45 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:29.331 21:17:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.331 21:17:45 -- host/auth.sh@44 -- # digest=sha384 00:24:29.331 21:17:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.331 21:17:45 -- host/auth.sh@44 -- # keyid=1 00:24:29.590 21:17:45 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:29.590 21:17:45 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:29.590 21:17:45 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.590 21:17:45 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.590 21:17:45 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:29.590 21:17:45 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:29.590 21:17:45 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:29.590 21:17:45 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:29.590 21:17:45 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.590 21:17:45 -- host/auth.sh@57 -- # digest=sha384 00:24:29.590 21:17:45 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.590 21:17:45 -- host/auth.sh@57 -- # keyid=1 00:24:29.590 21:17:45 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.590 21:17:45 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:29.590 21:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.590 21:17:45 -- common/autotest_common.sh@10 -- # set +x 00:24:29.590 21:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.590 21:17:45 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.590 21:17:45 -- nvmf/common.sh@730 -- # local ip 00:24:29.590 21:17:45 -- nvmf/common.sh@731 -- # ip_candidates=() 
00:24:29.590 21:17:45 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:29.590 21:17:45 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.590 21:17:45 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.590 21:17:45 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:29.590 21:17:45 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.591 21:17:45 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:29.591 21:17:45 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:29.591 21:17:45 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:29.591 21:17:45 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.591 21:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.591 21:17:45 -- common/autotest_common.sh@10 -- # set +x 00:24:30.158 nvme0n1 00:24:30.158 21:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.158 21:17:45 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.158 21:17:45 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.158 21:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.158 21:17:45 -- common/autotest_common.sh@10 -- # set +x 00:24:30.158 21:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.158 21:17:45 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.158 21:17:45 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.158 21:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.158 21:17:45 -- common/autotest_common.sh@10 -- # set +x 00:24:30.158 21:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.158 21:17:45 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.158 21:17:45 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:30.158 21:17:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.158 21:17:45 -- host/auth.sh@44 -- # digest=sha384 00:24:30.158 21:17:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.158 21:17:45 -- host/auth.sh@44 -- # keyid=2 00:24:30.158 21:17:45 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:30.158 21:17:45 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:30.158 21:17:45 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.158 21:17:45 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.158 21:17:45 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:30.158 21:17:45 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:30.158 21:17:45 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:30.158 21:17:45 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:30.158 21:17:45 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.158 21:17:45 -- host/auth.sh@57 -- # digest=sha384 00:24:30.158 21:17:45 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.158 21:17:45 -- host/auth.sh@57 -- # keyid=2 00:24:30.158 21:17:45 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.158 21:17:45 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.158 21:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.158 
21:17:45 -- common/autotest_common.sh@10 -- # set +x 00:24:30.158 21:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.158 21:17:45 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.158 21:17:45 -- nvmf/common.sh@730 -- # local ip 00:24:30.158 21:17:45 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:30.158 21:17:45 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:30.158 21:17:45 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.158 21:17:45 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.158 21:17:45 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:30.158 21:17:45 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.158 21:17:45 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:30.158 21:17:45 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:30.158 21:17:45 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:30.158 21:17:45 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.158 21:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.158 21:17:45 -- common/autotest_common.sh@10 -- # set +x 00:24:30.726 nvme0n1 00:24:30.726 21:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.726 21:17:46 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.726 21:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.726 21:17:46 -- common/autotest_common.sh@10 -- # set +x 00:24:30.726 21:17:46 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.726 21:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.726 21:17:46 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.726 21:17:46 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.726 21:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.726 21:17:46 -- common/autotest_common.sh@10 -- # set +x 00:24:30.726 21:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.726 21:17:46 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.726 21:17:46 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:30.726 21:17:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.726 21:17:46 -- host/auth.sh@44 -- # digest=sha384 00:24:30.726 21:17:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.726 21:17:46 -- host/auth.sh@44 -- # keyid=3 00:24:30.726 21:17:46 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:30.726 21:17:46 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:30.726 21:17:46 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.726 21:17:46 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.726 21:17:46 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:30.726 21:17:46 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:30.726 21:17:46 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:30.726 21:17:46 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:30.726 21:17:46 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.726 21:17:46 -- host/auth.sh@57 -- # digest=sha384 00:24:30.726 21:17:46 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.726 
21:17:46 -- host/auth.sh@57 -- # keyid=3 00:24:30.726 21:17:46 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.726 21:17:46 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.726 21:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.726 21:17:46 -- common/autotest_common.sh@10 -- # set +x 00:24:30.727 21:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.727 21:17:46 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.727 21:17:46 -- nvmf/common.sh@730 -- # local ip 00:24:30.727 21:17:46 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:30.727 21:17:46 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:30.727 21:17:46 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.727 21:17:46 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.727 21:17:46 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:30.727 21:17:46 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.727 21:17:46 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:30.727 21:17:46 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:30.727 21:17:46 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:30.727 21:17:46 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.727 21:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.727 21:17:46 -- common/autotest_common.sh@10 -- # set +x 00:24:31.295 nvme0n1 00:24:31.295 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.295 21:17:47 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.295 21:17:47 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.295 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.295 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:31.295 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.295 21:17:47 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.295 21:17:47 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.295 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.295 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:31.295 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.295 21:17:47 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.295 21:17:47 -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:31.295 21:17:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.295 21:17:47 -- host/auth.sh@44 -- # digest=sha384 00:24:31.295 21:17:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.295 21:17:47 -- host/auth.sh@44 -- # keyid=4 00:24:31.295 21:17:47 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:31.295 21:17:47 -- host/auth.sh@46 -- # ckey= 00:24:31.295 21:17:47 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.295 21:17:47 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.295 21:17:47 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:31.295 21:17:47 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.295 21:17:47 -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:31.295 21:17:47 -- host/auth.sh@55 -- # 
local digest dhgroup keyid ckey 00:24:31.295 21:17:47 -- host/auth.sh@57 -- # digest=sha384 00:24:31.295 21:17:47 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.295 21:17:47 -- host/auth.sh@57 -- # keyid=4 00:24:31.295 21:17:47 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.295 21:17:47 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:31.295 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.295 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:31.295 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.295 21:17:47 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.295 21:17:47 -- nvmf/common.sh@730 -- # local ip 00:24:31.295 21:17:47 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:31.295 21:17:47 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:31.295 21:17:47 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.295 21:17:47 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.295 21:17:47 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:31.295 21:17:47 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.295 21:17:47 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:31.295 21:17:47 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:31.295 21:17:47 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:31.295 21:17:47 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.296 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.296 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:31.864 nvme0n1 00:24:31.864 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.864 21:17:47 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.864 21:17:47 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.864 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.864 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:31.864 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.864 21:17:47 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.864 21:17:47 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.864 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.864 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:31.864 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.864 21:17:47 -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:31.864 21:17:47 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.864 21:17:47 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.864 21:17:47 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:31.864 21:17:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.864 21:17:47 -- host/auth.sh@44 -- # digest=sha512 00:24:31.864 21:17:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.864 21:17:47 -- host/auth.sh@44 -- # keyid=0 00:24:31.864 21:17:47 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:31.864 21:17:47 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:31.864 21:17:47 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.864 21:17:47 -- host/auth.sh@49 -- # 
echo ffdhe2048 00:24:31.864 21:17:47 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:31.864 21:17:47 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:31.864 21:17:47 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:31.864 21:17:47 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:31.864 21:17:47 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.864 21:17:47 -- host/auth.sh@57 -- # digest=sha512 00:24:31.864 21:17:47 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.864 21:17:47 -- host/auth.sh@57 -- # keyid=0 00:24:31.864 21:17:47 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.864 21:17:47 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:31.864 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.864 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:32.123 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.123 21:17:47 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.123 21:17:47 -- nvmf/common.sh@730 -- # local ip 00:24:32.123 21:17:47 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:32.123 21:17:47 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:32.123 21:17:47 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.123 21:17:47 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.123 21:17:47 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:32.123 21:17:47 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.123 21:17:47 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:32.123 21:17:47 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:32.123 21:17:47 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:32.124 21:17:47 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.124 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.124 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:32.124 nvme0n1 00:24:32.124 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.124 21:17:47 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.124 21:17:47 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.124 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.124 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:32.124 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.124 21:17:47 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.124 21:17:47 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.124 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.124 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:24:32.124 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.124 21:17:48 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.124 21:17:48 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:32.124 21:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.124 21:17:48 -- host/auth.sh@44 -- # digest=sha512 00:24:32.124 21:17:48 -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:24:32.124 21:17:48 -- host/auth.sh@44 -- # keyid=1 00:24:32.124 21:17:48 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:32.124 21:17:48 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:32.124 21:17:48 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.124 21:17:48 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.124 21:17:48 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:32.124 21:17:48 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:32.124 21:17:48 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:32.124 21:17:48 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:32.124 21:17:48 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.124 21:17:48 -- host/auth.sh@57 -- # digest=sha512 00:24:32.124 21:17:48 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.124 21:17:48 -- host/auth.sh@57 -- # keyid=1 00:24:32.124 21:17:48 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.124 21:17:48 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.124 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.124 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.124 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.124 21:17:48 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.124 21:17:48 -- nvmf/common.sh@730 -- # local ip 00:24:32.124 21:17:48 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:32.124 21:17:48 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:32.124 21:17:48 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.124 21:17:48 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.124 21:17:48 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:32.124 21:17:48 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.124 21:17:48 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:32.124 21:17:48 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:32.124 21:17:48 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:32.124 21:17:48 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.124 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.124 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.393 nvme0n1 00:24:32.393 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.393 21:17:48 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.394 21:17:48 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.394 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.394 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.394 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.394 21:17:48 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.394 21:17:48 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.394 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.394 21:17:48 -- common/autotest_common.sh@10 -- # 
set +x 00:24:32.394 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.394 21:17:48 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.394 21:17:48 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:32.394 21:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.394 21:17:48 -- host/auth.sh@44 -- # digest=sha512 00:24:32.394 21:17:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.394 21:17:48 -- host/auth.sh@44 -- # keyid=2 00:24:32.394 21:17:48 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:32.394 21:17:48 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:32.394 21:17:48 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.394 21:17:48 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.394 21:17:48 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:32.394 21:17:48 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:32.394 21:17:48 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:32.394 21:17:48 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:32.394 21:17:48 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.394 21:17:48 -- host/auth.sh@57 -- # digest=sha512 00:24:32.394 21:17:48 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.394 21:17:48 -- host/auth.sh@57 -- # keyid=2 00:24:32.394 21:17:48 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.394 21:17:48 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.394 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.394 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.394 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.394 21:17:48 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.394 21:17:48 -- nvmf/common.sh@730 -- # local ip 00:24:32.394 21:17:48 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:32.394 21:17:48 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:32.394 21:17:48 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.394 21:17:48 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.394 21:17:48 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:32.394 21:17:48 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.394 21:17:48 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:32.394 21:17:48 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:32.394 21:17:48 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:32.394 21:17:48 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.394 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.394 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.657 nvme0n1 00:24:32.657 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.657 21:17:48 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.657 21:17:48 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.657 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.657 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.657 21:17:48 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:24:32.657 21:17:48 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.657 21:17:48 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.657 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.657 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.657 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.657 21:17:48 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.657 21:17:48 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:32.657 21:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.657 21:17:48 -- host/auth.sh@44 -- # digest=sha512 00:24:32.657 21:17:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.657 21:17:48 -- host/auth.sh@44 -- # keyid=3 00:24:32.657 21:17:48 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:32.657 21:17:48 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:32.657 21:17:48 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.657 21:17:48 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.657 21:17:48 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:32.657 21:17:48 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:32.657 21:17:48 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:32.657 21:17:48 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:32.657 21:17:48 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.657 21:17:48 -- host/auth.sh@57 -- # digest=sha512 00:24:32.657 21:17:48 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.657 21:17:48 -- host/auth.sh@57 -- # keyid=3 00:24:32.657 21:17:48 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.657 21:17:48 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.657 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.657 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.657 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.657 21:17:48 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.657 21:17:48 -- nvmf/common.sh@730 -- # local ip 00:24:32.657 21:17:48 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:32.657 21:17:48 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:32.657 21:17:48 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.657 21:17:48 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.657 21:17:48 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:32.657 21:17:48 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.657 21:17:48 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:32.657 21:17:48 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:32.657 21:17:48 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:32.657 21:17:48 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.657 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.657 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.916 nvme0n1 00:24:32.916 21:17:48 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:24:32.916 21:17:48 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.916 21:17:48 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.916 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.916 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.916 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.916 21:17:48 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.916 21:17:48 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.916 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.916 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.916 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.916 21:17:48 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.916 21:17:48 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:32.916 21:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.916 21:17:48 -- host/auth.sh@44 -- # digest=sha512 00:24:32.916 21:17:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.916 21:17:48 -- host/auth.sh@44 -- # keyid=4 00:24:32.916 21:17:48 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:32.916 21:17:48 -- host/auth.sh@46 -- # ckey= 00:24:32.916 21:17:48 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.916 21:17:48 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.916 21:17:48 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:32.916 21:17:48 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.916 21:17:48 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:32.916 21:17:48 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.916 21:17:48 -- host/auth.sh@57 -- # digest=sha512 00:24:32.916 21:17:48 -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.916 21:17:48 -- host/auth.sh@57 -- # keyid=4 00:24:32.916 21:17:48 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.916 21:17:48 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.916 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.916 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.916 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.916 21:17:48 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.916 21:17:48 -- nvmf/common.sh@730 -- # local ip 00:24:32.916 21:17:48 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:32.916 21:17:48 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:32.916 21:17:48 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.916 21:17:48 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.916 21:17:48 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:32.916 21:17:48 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.916 21:17:48 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:32.916 21:17:48 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:32.916 21:17:48 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:32.916 21:17:48 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.916 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 
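
A note on the controller-key handling visible in the attach just above: key index 4 is defined without a controller key (its ckey is the empty string in the @46 trace lines), so the @58 expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) produces no extra arguments and the bdev_nvme_attach_controller call carries only --dhchap-key key4, while indexes 0-3 also pass --dhchap-ctrlr-key. A minimal, self-contained sketch of that bash :+ idiom (the placeholder ckeys values below are illustrative, not the real secrets):

    # Reproduces the host/auth.sh@58 trick: emit the controller-key option pair
    # only when a controller key is actually defined for this key index.
    keyid=4
    ckeys=("c0" "c1" "c2" "c3" "")     # index 4 deliberately empty, as in the test
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "controller-key args for key${keyid}: ${ckey[*]:-<none>}"   # prints <none>

With keyid=3 the same expansion yields the two words --dhchap-ctrlr-key ckey3, which is exactly what the earlier sha384/ffdhe8192 attach shows.
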
00:24:32.916 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.916 nvme0n1 00:24:32.916 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.916 21:17:48 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.916 21:17:48 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.916 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.916 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.176 21:17:48 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.176 21:17:48 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.176 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.176 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:33.176 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.176 21:17:48 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.176 21:17:48 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.176 21:17:48 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:33.176 21:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.176 21:17:48 -- host/auth.sh@44 -- # digest=sha512 00:24:33.176 21:17:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.176 21:17:48 -- host/auth.sh@44 -- # keyid=0 00:24:33.176 21:17:48 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:33.176 21:17:48 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:33.176 21:17:48 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.176 21:17:48 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.176 21:17:48 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:33.176 21:17:48 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:33.176 21:17:48 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:33.176 21:17:48 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:33.176 21:17:48 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.176 21:17:48 -- host/auth.sh@57 -- # digest=sha512 00:24:33.176 21:17:48 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.176 21:17:48 -- host/auth.sh@57 -- # keyid=0 00:24:33.176 21:17:48 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.176 21:17:48 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.176 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.176 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:33.176 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.176 21:17:48 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.176 21:17:48 -- nvmf/common.sh@730 -- # local ip 00:24:33.176 21:17:48 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:33.176 21:17:48 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:33.176 21:17:48 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.176 21:17:48 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.176 21:17:48 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:33.176 21:17:48 -- nvmf/common.sh@736 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:33.176 21:17:48 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:33.176 21:17:48 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:33.176 21:17:48 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:33.176 21:17:48 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.176 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.176 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:24:33.176 nvme0n1 00:24:33.176 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.176 21:17:49 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.176 21:17:49 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.176 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.176 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.176 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.176 21:17:49 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.176 21:17:49 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.176 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.176 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.436 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.436 21:17:49 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.436 21:17:49 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:33.436 21:17:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.436 21:17:49 -- host/auth.sh@44 -- # digest=sha512 00:24:33.436 21:17:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.436 21:17:49 -- host/auth.sh@44 -- # keyid=1 00:24:33.436 21:17:49 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:33.436 21:17:49 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:33.436 21:17:49 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.436 21:17:49 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.436 21:17:49 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:33.436 21:17:49 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:33.436 21:17:49 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:33.436 21:17:49 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:33.436 21:17:49 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.436 21:17:49 -- host/auth.sh@57 -- # digest=sha512 00:24:33.436 21:17:49 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.436 21:17:49 -- host/auth.sh@57 -- # keyid=1 00:24:33.436 21:17:49 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.436 21:17:49 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.436 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.436 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.436 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.436 21:17:49 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.436 21:17:49 -- nvmf/common.sh@730 -- # local ip 
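
The nvmf/common.sh@730-744 lines that repeat before every attach are the get_main_ns_ip helper resolving which address the initiator should dial: it keeps an associative array mapping transport to the name of the environment variable that holds the address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and, since this job runs TCP, ends up echoing 10.0.0.1. A simplified reconstruction from the trace follows; the TEST_TRANSPORT variable name is an assumption, and the real helper also validates that each lookup is non-empty (the [[ -z ... ]] checks above).

    # Simplified sketch of get_main_ns_ip, reconstructed from the xtrace lines.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        ip=${ip_candidates[$TEST_TRANSPORT]}   # "tcp" here -> NVMF_INITIATOR_IP
        echo "${!ip}"                          # indirect expansion -> 10.0.0.1
    }
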
00:24:33.436 21:17:49 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:33.436 21:17:49 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:33.436 21:17:49 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.436 21:17:49 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.436 21:17:49 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:33.436 21:17:49 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.436 21:17:49 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:33.436 21:17:49 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:33.436 21:17:49 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:33.436 21:17:49 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.436 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.436 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.436 nvme0n1 00:24:33.436 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.436 21:17:49 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.436 21:17:49 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.436 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.436 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.436 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.436 21:17:49 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.436 21:17:49 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.436 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.436 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.436 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.436 21:17:49 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.436 21:17:49 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:33.436 21:17:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.436 21:17:49 -- host/auth.sh@44 -- # digest=sha512 00:24:33.436 21:17:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.436 21:17:49 -- host/auth.sh@44 -- # keyid=2 00:24:33.436 21:17:49 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:33.436 21:17:49 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:33.436 21:17:49 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.436 21:17:49 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.436 21:17:49 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:33.436 21:17:49 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:33.436 21:17:49 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:33.436 21:17:49 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:33.436 21:17:49 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.436 21:17:49 -- host/auth.sh@57 -- # digest=sha512 00:24:33.436 21:17:49 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.436 21:17:49 -- host/auth.sh@57 -- # keyid=2 00:24:33.436 21:17:49 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.436 21:17:49 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.436 21:17:49 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.436 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.436 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.436 21:17:49 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.436 21:17:49 -- nvmf/common.sh@730 -- # local ip 00:24:33.436 21:17:49 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:33.436 21:17:49 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:33.436 21:17:49 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.436 21:17:49 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.436 21:17:49 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:33.436 21:17:49 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.436 21:17:49 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:33.436 21:17:49 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:33.436 21:17:49 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:33.436 21:17:49 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.436 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.436 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.696 nvme0n1 00:24:33.696 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.696 21:17:49 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.696 21:17:49 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.696 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.696 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.696 21:17:49 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.696 21:17:49 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.696 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.696 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.696 21:17:49 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.696 21:17:49 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:33.696 21:17:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.696 21:17:49 -- host/auth.sh@44 -- # digest=sha512 00:24:33.696 21:17:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.696 21:17:49 -- host/auth.sh@44 -- # keyid=3 00:24:33.696 21:17:49 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:33.696 21:17:49 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:33.696 21:17:49 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.696 21:17:49 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.696 21:17:49 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:33.696 21:17:49 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:33.696 21:17:49 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:33.696 21:17:49 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:33.696 21:17:49 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.696 21:17:49 -- host/auth.sh@57 -- # digest=sha512 00:24:33.696 
21:17:49 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.696 21:17:49 -- host/auth.sh@57 -- # keyid=3 00:24:33.696 21:17:49 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.696 21:17:49 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.696 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.696 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.696 21:17:49 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.696 21:17:49 -- nvmf/common.sh@730 -- # local ip 00:24:33.696 21:17:49 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:33.696 21:17:49 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:33.696 21:17:49 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.696 21:17:49 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.696 21:17:49 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:33.696 21:17:49 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.697 21:17:49 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:33.697 21:17:49 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:33.697 21:17:49 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:33.697 21:17:49 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.697 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.697 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.955 nvme0n1 00:24:33.955 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.955 21:17:49 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.955 21:17:49 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.955 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.955 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.956 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.956 21:17:49 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.956 21:17:49 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.956 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.956 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.956 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.956 21:17:49 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.956 21:17:49 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:33.956 21:17:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.956 21:17:49 -- host/auth.sh@44 -- # digest=sha512 00:24:33.956 21:17:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.956 21:17:49 -- host/auth.sh@44 -- # keyid=4 00:24:33.956 21:17:49 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:33.956 21:17:49 -- host/auth.sh@46 -- # ckey= 00:24:33.956 21:17:49 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.956 21:17:49 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.956 21:17:49 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:33.956 21:17:49 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.956 21:17:49 -- host/auth.sh@104 -- # connect_authenticate 
sha512 ffdhe3072 4 00:24:33.956 21:17:49 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.956 21:17:49 -- host/auth.sh@57 -- # digest=sha512 00:24:33.956 21:17:49 -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.956 21:17:49 -- host/auth.sh@57 -- # keyid=4 00:24:33.956 21:17:49 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.956 21:17:49 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.956 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.956 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.956 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.956 21:17:49 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.956 21:17:49 -- nvmf/common.sh@730 -- # local ip 00:24:33.956 21:17:49 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:33.956 21:17:49 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:33.956 21:17:49 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.956 21:17:49 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.956 21:17:49 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:33.956 21:17:49 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.956 21:17:49 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:33.956 21:17:49 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:33.956 21:17:49 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:33.956 21:17:49 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.956 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.956 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:34.215 nvme0n1 00:24:34.215 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.215 21:17:50 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.215 21:17:50 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.215 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.215 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.215 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.215 21:17:50 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.215 21:17:50 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.215 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.215 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.215 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.215 21:17:50 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.215 21:17:50 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.215 21:17:50 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:34.215 21:17:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.215 21:17:50 -- host/auth.sh@44 -- # digest=sha512 00:24:34.215 21:17:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.215 21:17:50 -- host/auth.sh@44 -- # keyid=0 00:24:34.215 21:17:50 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:34.215 21:17:50 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:34.215 21:17:50 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.215 21:17:50 -- host/auth.sh@49 -- # echo ffdhe4096 
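
nvmet_auth_set_key (the host/auth.sh@42-51 lines) is the target-side half of each iteration: before the host tries to connect, it installs the digest as 'hmac(shaN)', the FFDHE group by name, and the host and controller secrets in their DHHC-1 textual form (DHHC-1:NN:<base64>: as seen in the @45/@46 lines). The destinations of those echoes are not visible in this excerpt, so the sketch below only mirrors the argument handling; write_target_attr is a hypothetical stand-in for the target-configuration writes the real helper performs, and the keys/ckeys arrays are assumed to be set up earlier in the script.

    # Argument-handling sketch of nvmet_auth_set_key, matching the @42-@51 trace.
    # write_target_attr is hypothetical; the real configfs paths are not shown here.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]}
        ckey=${ckeys[keyid]}

        write_target_attr "hmac(${digest})"            # @48: echo 'hmac(sha512)'
        write_target_attr "$dhgroup"                   # @49: echo ffdhe4096
        write_target_attr "$key"                       # @50: echo DHHC-1:..:<base64>:
        [[ -z $ckey ]] || write_target_attr "$ckey"    # @51: only when a ctrlr key exists
    }
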
00:24:34.215 21:17:50 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:34.215 21:17:50 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:34.215 21:17:50 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:34.215 21:17:50 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:34.215 21:17:50 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.215 21:17:50 -- host/auth.sh@57 -- # digest=sha512 00:24:34.215 21:17:50 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.215 21:17:50 -- host/auth.sh@57 -- # keyid=0 00:24:34.215 21:17:50 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.215 21:17:50 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.215 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.215 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.215 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.215 21:17:50 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.215 21:17:50 -- nvmf/common.sh@730 -- # local ip 00:24:34.215 21:17:50 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:34.215 21:17:50 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:34.215 21:17:50 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.215 21:17:50 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.215 21:17:50 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:34.215 21:17:50 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.215 21:17:50 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:34.215 21:17:50 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:34.215 21:17:50 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:34.215 21:17:50 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.215 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.215 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.474 nvme0n1 00:24:34.474 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.474 21:17:50 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.474 21:17:50 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.474 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.474 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.474 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.474 21:17:50 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.474 21:17:50 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.474 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.474 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.474 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.474 21:17:50 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.474 21:17:50 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:34.474 21:17:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.474 21:17:50 -- host/auth.sh@44 -- # digest=sha512 00:24:34.474 21:17:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:24:34.474 21:17:50 -- host/auth.sh@44 -- # keyid=1 00:24:34.474 21:17:50 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:34.474 21:17:50 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:34.474 21:17:50 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.474 21:17:50 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.474 21:17:50 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:34.474 21:17:50 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:34.474 21:17:50 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:34.474 21:17:50 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:34.474 21:17:50 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.474 21:17:50 -- host/auth.sh@57 -- # digest=sha512 00:24:34.474 21:17:50 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.474 21:17:50 -- host/auth.sh@57 -- # keyid=1 00:24:34.474 21:17:50 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.474 21:17:50 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.474 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.474 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.733 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.733 21:17:50 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.733 21:17:50 -- nvmf/common.sh@730 -- # local ip 00:24:34.733 21:17:50 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:34.733 21:17:50 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:34.733 21:17:50 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.733 21:17:50 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.733 21:17:50 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:34.733 21:17:50 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.733 21:17:50 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:34.733 21:17:50 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:34.733 21:17:50 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:34.733 21:17:50 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.733 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.733 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.733 nvme0n1 00:24:34.733 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.733 21:17:50 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.733 21:17:50 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.733 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.733 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.993 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.993 21:17:50 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.993 21:17:50 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.993 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.993 21:17:50 -- common/autotest_common.sh@10 -- # set +x 
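
The host side of each iteration (connect_authenticate, @55-@65 in the trace) is four RPCs in a row: bdev_nvme_set_options narrows the allowed DH-HMAC-CHAP digests and DH groups to the pair under test, bdev_nvme_attach_controller dials 10.0.0.1:4420 with the chosen --dhchap-key (plus --dhchap-ctrlr-key when one exists), bdev_nvme_get_controllers piped through jq confirms that nvme0 appeared, and bdev_nvme_detach_controller cleans up for the next pass. A stand-alone sketch of that sequence follows; it assumes the log's rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py and that key1/ckey1 were registered with the application earlier in the script (not shown in this excerpt).

    # One connect_authenticate pass, host side, written as plain rpc.py calls (sketch).
    rpc=scripts/rpc.py

    # Allow only the digest/DH group under test for this pass.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Attach with DH-HMAC-CHAP; the authentication handshake happens during this call.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The controller is only listed if the handshake succeeded; then detach it.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    $rpc bdev_nvme_detach_controller nvme0
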
00:24:34.993 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.993 21:17:50 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.993 21:17:50 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:34.993 21:17:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.993 21:17:50 -- host/auth.sh@44 -- # digest=sha512 00:24:34.993 21:17:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.993 21:17:50 -- host/auth.sh@44 -- # keyid=2 00:24:34.993 21:17:50 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:34.993 21:17:50 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:34.993 21:17:50 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.993 21:17:50 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.993 21:17:50 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:34.993 21:17:50 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:34.993 21:17:50 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:34.993 21:17:50 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:34.993 21:17:50 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.993 21:17:50 -- host/auth.sh@57 -- # digest=sha512 00:24:34.993 21:17:50 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.993 21:17:50 -- host/auth.sh@57 -- # keyid=2 00:24:34.993 21:17:50 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.993 21:17:50 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.993 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.993 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.993 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.993 21:17:50 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.993 21:17:50 -- nvmf/common.sh@730 -- # local ip 00:24:34.993 21:17:50 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:34.993 21:17:50 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:34.993 21:17:50 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.993 21:17:50 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.993 21:17:50 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:34.993 21:17:50 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.993 21:17:50 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:34.993 21:17:50 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:34.993 21:17:50 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:34.993 21:17:50 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.993 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.993 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:35.253 nvme0n1 00:24:35.253 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.253 21:17:50 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.253 21:17:50 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.253 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.253 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:24:35.253 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:24:35.253 21:17:51 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.253 21:17:51 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.253 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.253 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.253 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.253 21:17:51 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.253 21:17:51 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:35.253 21:17:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.253 21:17:51 -- host/auth.sh@44 -- # digest=sha512 00:24:35.253 21:17:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.253 21:17:51 -- host/auth.sh@44 -- # keyid=3 00:24:35.253 21:17:51 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:35.253 21:17:51 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:35.253 21:17:51 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.253 21:17:51 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.253 21:17:51 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:35.253 21:17:51 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:35.253 21:17:51 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:35.253 21:17:51 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:35.253 21:17:51 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.253 21:17:51 -- host/auth.sh@57 -- # digest=sha512 00:24:35.253 21:17:51 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.253 21:17:51 -- host/auth.sh@57 -- # keyid=3 00:24:35.253 21:17:51 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.253 21:17:51 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.253 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.253 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.253 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.253 21:17:51 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.253 21:17:51 -- nvmf/common.sh@730 -- # local ip 00:24:35.253 21:17:51 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:35.253 21:17:51 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:35.253 21:17:51 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.253 21:17:51 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.253 21:17:51 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:35.253 21:17:51 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.253 21:17:51 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:35.253 21:17:51 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:35.253 21:17:51 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:35.253 21:17:51 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.253 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.253 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.533 nvme0n1 00:24:35.533 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:24:35.533 21:17:51 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.533 21:17:51 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.533 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.533 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.533 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.533 21:17:51 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.533 21:17:51 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.533 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.533 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.533 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.533 21:17:51 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.533 21:17:51 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:35.533 21:17:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.533 21:17:51 -- host/auth.sh@44 -- # digest=sha512 00:24:35.533 21:17:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.533 21:17:51 -- host/auth.sh@44 -- # keyid=4 00:24:35.533 21:17:51 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:35.533 21:17:51 -- host/auth.sh@46 -- # ckey= 00:24:35.533 21:17:51 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.533 21:17:51 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.533 21:17:51 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:35.533 21:17:51 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.533 21:17:51 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:35.533 21:17:51 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.533 21:17:51 -- host/auth.sh@57 -- # digest=sha512 00:24:35.533 21:17:51 -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.533 21:17:51 -- host/auth.sh@57 -- # keyid=4 00:24:35.533 21:17:51 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.533 21:17:51 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.533 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.533 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.533 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.533 21:17:51 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.533 21:17:51 -- nvmf/common.sh@730 -- # local ip 00:24:35.533 21:17:51 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:35.533 21:17:51 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:35.533 21:17:51 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.533 21:17:51 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.533 21:17:51 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:35.533 21:17:51 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.533 21:17:51 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:35.533 21:17:51 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:35.533 21:17:51 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:35.533 21:17:51 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.533 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 
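
Stepping back, the @100/@101/@102 loop headers that keep reappearing show the overall shape of this phase of host/auth.sh: every DH-HMAC-CHAP digest is crossed with every FFDHE group and every configured key index, and each combination runs the target-side nvmet_auth_set_key followed by the host-side connect_authenticate. A structural sketch, with the lists trimmed to the values visible in this excerpt (the real script covers more groups than shown here):

    # Loop skeleton inferred from the host/auth.sh@100-@104 trace lines.
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
    keys=(key0 key1 key2 key3 key4)            # placeholder names for the 5 secrets

    for digest in "${digests[@]}"; do          # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do    # host/auth.sh@101
            for keyid in "${!keys[@]}"; do     # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
            done
        done
    done
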
00:24:35.533 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.792 nvme0n1 00:24:35.792 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.792 21:17:51 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.792 21:17:51 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.792 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.792 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.792 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.792 21:17:51 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.792 21:17:51 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.792 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.792 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.792 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.792 21:17:51 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.792 21:17:51 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.792 21:17:51 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:35.792 21:17:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.792 21:17:51 -- host/auth.sh@44 -- # digest=sha512 00:24:35.792 21:17:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.792 21:17:51 -- host/auth.sh@44 -- # keyid=0 00:24:35.792 21:17:51 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:35.792 21:17:51 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:35.792 21:17:51 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.792 21:17:51 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.792 21:17:51 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:35.792 21:17:51 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:35.792 21:17:51 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:35.792 21:17:51 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:35.792 21:17:51 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.792 21:17:51 -- host/auth.sh@57 -- # digest=sha512 00:24:35.792 21:17:51 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.792 21:17:51 -- host/auth.sh@57 -- # keyid=0 00:24:35.792 21:17:51 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.792 21:17:51 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:35.792 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.792 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:35.792 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.792 21:17:51 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.792 21:17:51 -- nvmf/common.sh@730 -- # local ip 00:24:35.792 21:17:51 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:35.792 21:17:51 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:35.792 21:17:51 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.792 21:17:51 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.792 21:17:51 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:35.792 21:17:51 -- nvmf/common.sh@736 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:35.792 21:17:51 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:35.792 21:17:51 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:35.792 21:17:51 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:35.793 21:17:51 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.793 21:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.793 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:24:36.361 nvme0n1 00:24:36.361 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.362 21:17:52 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.362 21:17:52 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.362 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.362 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:36.362 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.362 21:17:52 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.362 21:17:52 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.362 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.362 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:36.362 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.362 21:17:52 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.362 21:17:52 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:36.362 21:17:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.362 21:17:52 -- host/auth.sh@44 -- # digest=sha512 00:24:36.362 21:17:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.362 21:17:52 -- host/auth.sh@44 -- # keyid=1 00:24:36.362 21:17:52 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:36.362 21:17:52 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:36.362 21:17:52 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.362 21:17:52 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.362 21:17:52 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:36.362 21:17:52 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:36.362 21:17:52 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:36.362 21:17:52 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:36.362 21:17:52 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.362 21:17:52 -- host/auth.sh@57 -- # digest=sha512 00:24:36.362 21:17:52 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.362 21:17:52 -- host/auth.sh@57 -- # keyid=1 00:24:36.362 21:17:52 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.362 21:17:52 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.362 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.362 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:36.362 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.362 21:17:52 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.362 21:17:52 -- nvmf/common.sh@730 -- # local ip 
00:24:36.362 21:17:52 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:36.362 21:17:52 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:36.362 21:17:52 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.362 21:17:52 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.362 21:17:52 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:36.362 21:17:52 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.362 21:17:52 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:36.362 21:17:52 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:36.362 21:17:52 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:36.362 21:17:52 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.362 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.362 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:36.621 nvme0n1 00:24:36.621 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.621 21:17:52 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.621 21:17:52 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.621 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.621 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:36.621 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.621 21:17:52 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.621 21:17:52 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.621 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.621 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:36.621 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.622 21:17:52 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.622 21:17:52 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:36.622 21:17:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.622 21:17:52 -- host/auth.sh@44 -- # digest=sha512 00:24:36.622 21:17:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.622 21:17:52 -- host/auth.sh@44 -- # keyid=2 00:24:36.622 21:17:52 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:36.622 21:17:52 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:36.622 21:17:52 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.622 21:17:52 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.622 21:17:52 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:36.622 21:17:52 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:36.622 21:17:52 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:36.622 21:17:52 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:36.622 21:17:52 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.622 21:17:52 -- host/auth.sh@57 -- # digest=sha512 00:24:36.622 21:17:52 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.622 21:17:52 -- host/auth.sh@57 -- # keyid=2 00:24:36.622 21:17:52 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.622 21:17:52 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.622 21:17:52 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.622 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:36.881 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.881 21:17:52 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.881 21:17:52 -- nvmf/common.sh@730 -- # local ip 00:24:36.881 21:17:52 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:36.881 21:17:52 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:36.881 21:17:52 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.881 21:17:52 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.881 21:17:52 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:36.881 21:17:52 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.881 21:17:52 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:36.881 21:17:52 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:36.881 21:17:52 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:36.881 21:17:52 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.881 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.881 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.140 nvme0n1 00:24:37.140 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.140 21:17:52 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.140 21:17:52 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.140 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.140 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.140 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.140 21:17:52 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.140 21:17:52 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.140 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.140 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.140 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.140 21:17:52 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.140 21:17:52 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:37.140 21:17:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.140 21:17:52 -- host/auth.sh@44 -- # digest=sha512 00:24:37.140 21:17:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.140 21:17:52 -- host/auth.sh@44 -- # keyid=3 00:24:37.140 21:17:52 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:37.141 21:17:52 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:37.141 21:17:52 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.141 21:17:52 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.141 21:17:52 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:37.141 21:17:52 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:37.141 21:17:52 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:37.141 21:17:52 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:37.141 21:17:52 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.141 21:17:52 -- host/auth.sh@57 -- # digest=sha512 00:24:37.141 
21:17:52 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.141 21:17:52 -- host/auth.sh@57 -- # keyid=3 00:24:37.141 21:17:52 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.141 21:17:52 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:37.141 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.141 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.141 21:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.141 21:17:52 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.141 21:17:52 -- nvmf/common.sh@730 -- # local ip 00:24:37.141 21:17:52 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:37.141 21:17:52 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:37.141 21:17:52 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.141 21:17:52 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.141 21:17:52 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:37.141 21:17:52 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.141 21:17:52 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:37.141 21:17:52 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:37.141 21:17:52 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:37.141 21:17:52 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.141 21:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.141 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.801 nvme0n1 00:24:37.801 21:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.801 21:17:53 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.801 21:17:53 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.801 21:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.801 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.801 21:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.801 21:17:53 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.801 21:17:53 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.801 21:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.801 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.801 21:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.801 21:17:53 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.801 21:17:53 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:37.801 21:17:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.801 21:17:53 -- host/auth.sh@44 -- # digest=sha512 00:24:37.801 21:17:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.801 21:17:53 -- host/auth.sh@44 -- # keyid=4 00:24:37.801 21:17:53 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:37.801 21:17:53 -- host/auth.sh@46 -- # ckey= 00:24:37.801 21:17:53 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.801 21:17:53 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.801 21:17:53 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:37.801 21:17:53 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.801 21:17:53 -- host/auth.sh@104 -- # connect_authenticate 
sha512 ffdhe6144 4 00:24:37.801 21:17:53 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.801 21:17:53 -- host/auth.sh@57 -- # digest=sha512 00:24:37.801 21:17:53 -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.801 21:17:53 -- host/auth.sh@57 -- # keyid=4 00:24:37.801 21:17:53 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.801 21:17:53 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:37.801 21:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.801 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.801 21:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.801 21:17:53 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.801 21:17:53 -- nvmf/common.sh@730 -- # local ip 00:24:37.801 21:17:53 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:37.801 21:17:53 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:37.801 21:17:53 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.801 21:17:53 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.801 21:17:53 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:37.801 21:17:53 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.801 21:17:53 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:37.801 21:17:53 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:37.801 21:17:53 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:37.801 21:17:53 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.801 21:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.801 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:24:38.060 nvme0n1 00:24:38.060 21:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.060 21:17:53 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.060 21:17:53 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.060 21:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.060 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:24:38.060 21:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.060 21:17:53 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.060 21:17:53 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.060 21:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.060 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:24:38.060 21:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.060 21:17:53 -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.060 21:17:53 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.060 21:17:53 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:38.060 21:17:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.060 21:17:53 -- host/auth.sh@44 -- # digest=sha512 00:24:38.060 21:17:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.060 21:17:53 -- host/auth.sh@44 -- # keyid=0 00:24:38.060 21:17:53 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:38.060 21:17:53 -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:38.060 21:17:53 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.060 21:17:53 -- host/auth.sh@49 -- # echo ffdhe8192 
00:24:38.060 21:17:53 -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhNWZiYjNkMDdhOTczZjM3NmQzYmE5MGNmNjg1OTmyq3lw: 00:24:38.060 21:17:53 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: ]] 00:24:38.060 21:17:53 -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc2YmZiNTk4NjQ1ZWE3OWIzZDM2OWZkMjlhMzYzY2NmY2NhMTJhMzY2MzA3YmZjNzFlZGM4NDg4OWMxNzE4OfrUzHU=: 00:24:38.060 21:17:53 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:38.060 21:17:53 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.060 21:17:53 -- host/auth.sh@57 -- # digest=sha512 00:24:38.060 21:17:53 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.060 21:17:53 -- host/auth.sh@57 -- # keyid=0 00:24:38.060 21:17:53 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.060 21:17:53 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:38.060 21:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.060 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:24:38.060 21:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.060 21:17:53 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.060 21:17:53 -- nvmf/common.sh@730 -- # local ip 00:24:38.060 21:17:53 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:38.060 21:17:53 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:38.060 21:17:53 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.060 21:17:53 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.060 21:17:53 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:38.060 21:17:53 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.060 21:17:53 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:38.060 21:17:53 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:38.060 21:17:53 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:38.060 21:17:53 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.060 21:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.060 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:24:38.627 nvme0n1 00:24:38.627 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.627 21:17:54 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.627 21:17:54 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.627 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.627 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:24:38.627 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.627 21:17:54 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.627 21:17:54 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.627 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.627 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:24:38.627 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.627 21:17:54 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.627 21:17:54 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:38.627 21:17:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.627 21:17:54 -- host/auth.sh@44 -- # digest=sha512 00:24:38.627 21:17:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 
00:24:38.627 21:17:54 -- host/auth.sh@44 -- # keyid=1 00:24:38.627 21:17:54 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:38.627 21:17:54 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:38.627 21:17:54 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.627 21:17:54 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.628 21:17:54 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:38.628 21:17:54 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:38.628 21:17:54 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:38.628 21:17:54 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:38.628 21:17:54 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.628 21:17:54 -- host/auth.sh@57 -- # digest=sha512 00:24:38.628 21:17:54 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.628 21:17:54 -- host/auth.sh@57 -- # keyid=1 00:24:38.628 21:17:54 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.628 21:17:54 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:38.628 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.628 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:24:38.628 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.628 21:17:54 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.628 21:17:54 -- nvmf/common.sh@730 -- # local ip 00:24:38.628 21:17:54 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:38.628 21:17:54 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:38.628 21:17:54 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.628 21:17:54 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.628 21:17:54 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:38.628 21:17:54 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.628 21:17:54 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:38.628 21:17:54 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:38.628 21:17:54 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:38.628 21:17:54 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.628 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.628 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:24:39.564 nvme0n1 00:24:39.564 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.564 21:17:55 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.564 21:17:55 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.564 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.564 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:24:39.564 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.564 21:17:55 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.564 21:17:55 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.564 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.564 21:17:55 -- common/autotest_common.sh@10 -- # set +x 
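The trace above repeats one cycle per digest, DH group, and key index: set the allowed DH-HMAC-CHAP parameters on the initiator, attach a controller with the matching key (and controller key, when one exists), confirm it shows up as nvme0, then detach. A minimal stand-alone sketch of a single initiator-side cycle, assuming SPDK's scripts/rpc.py on the default socket, a target listening at 10.0.0.1:4420, and that key1/ckey1 name keys registered earlier in the run (placeholders standing in for the literal DHHC-1 values shown in the trace):

  # one connect/verify/detach cycle for sha512 + ffdhe8192, key index 1 (illustrative)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0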
00:24:39.564 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.564 21:17:55 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.564 21:17:55 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:39.564 21:17:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.564 21:17:55 -- host/auth.sh@44 -- # digest=sha512 00:24:39.564 21:17:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.564 21:17:55 -- host/auth.sh@44 -- # keyid=2 00:24:39.564 21:17:55 -- host/auth.sh@45 -- # key=DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:39.564 21:17:55 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:39.564 21:17:55 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.564 21:17:55 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.564 21:17:55 -- host/auth.sh@50 -- # echo DHHC-1:01:NzAwYjhhZmQ3YzlkMjkxMDMyYzYzMDk4NGZhM2VlNDU/jjTP: 00:24:39.564 21:17:55 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: ]] 00:24:39.564 21:17:55 -- host/auth.sh@51 -- # echo DHHC-1:01:MWZmNzBiODU0MzQ3MDJlYmMxYmVkZmI2MGE2N2RjMTEbLFxR: 00:24:39.564 21:17:55 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:39.564 21:17:55 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.564 21:17:55 -- host/auth.sh@57 -- # digest=sha512 00:24:39.564 21:17:55 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.564 21:17:55 -- host/auth.sh@57 -- # keyid=2 00:24:39.564 21:17:55 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.564 21:17:55 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:39.564 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.564 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:24:39.564 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.564 21:17:55 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.564 21:17:55 -- nvmf/common.sh@730 -- # local ip 00:24:39.564 21:17:55 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:39.564 21:17:55 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:39.564 21:17:55 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.564 21:17:55 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.564 21:17:55 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:39.564 21:17:55 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.564 21:17:55 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:39.564 21:17:55 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:39.564 21:17:55 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:39.564 21:17:55 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.564 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.564 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:24:40.131 nvme0n1 00:24:40.131 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.131 21:17:55 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.131 21:17:55 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.131 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.131 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:24:40.131 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:24:40.131 21:17:55 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.131 21:17:55 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.131 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.131 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:24:40.131 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.131 21:17:55 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.132 21:17:55 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:40.132 21:17:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.132 21:17:55 -- host/auth.sh@44 -- # digest=sha512 00:24:40.132 21:17:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.132 21:17:55 -- host/auth.sh@44 -- # keyid=3 00:24:40.132 21:17:55 -- host/auth.sh@45 -- # key=DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:40.132 21:17:55 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:40.132 21:17:55 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.132 21:17:55 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.132 21:17:55 -- host/auth.sh@50 -- # echo DHHC-1:02:NTVmNGJkYWJkZDY2ZmJlMzgzOGYwNWZiMTlkMzVkZTJiM2RkZjU5OTllMmE5YzgxVvBHHw==: 00:24:40.132 21:17:55 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: ]] 00:24:40.132 21:17:55 -- host/auth.sh@51 -- # echo DHHC-1:00:ZWFiY2JlYTZmODQ0NTk3NGViNjE0ZDlkMWE0ZjhhMDZld/vL: 00:24:40.132 21:17:55 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:40.132 21:17:55 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.132 21:17:55 -- host/auth.sh@57 -- # digest=sha512 00:24:40.132 21:17:55 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.132 21:17:55 -- host/auth.sh@57 -- # keyid=3 00:24:40.132 21:17:55 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.132 21:17:55 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:40.132 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.132 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:24:40.132 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.132 21:17:55 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.132 21:17:55 -- nvmf/common.sh@730 -- # local ip 00:24:40.132 21:17:55 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:40.132 21:17:55 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:40.132 21:17:55 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.132 21:17:55 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.132 21:17:55 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:40.132 21:17:55 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.132 21:17:55 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:40.132 21:17:55 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:40.132 21:17:55 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:40.132 21:17:55 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.132 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.132 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:24:40.700 nvme0n1 00:24:40.700 21:17:56 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:24:40.700 21:17:56 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.700 21:17:56 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.700 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.700 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:24:40.700 21:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.700 21:17:56 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.700 21:17:56 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.700 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.700 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:24:40.700 21:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.700 21:17:56 -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.700 21:17:56 -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:40.700 21:17:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.700 21:17:56 -- host/auth.sh@44 -- # digest=sha512 00:24:40.700 21:17:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.700 21:17:56 -- host/auth.sh@44 -- # keyid=4 00:24:40.700 21:17:56 -- host/auth.sh@45 -- # key=DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:40.700 21:17:56 -- host/auth.sh@46 -- # ckey= 00:24:40.700 21:17:56 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.700 21:17:56 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.700 21:17:56 -- host/auth.sh@50 -- # echo DHHC-1:03:NjE5MmExNzYyOTVmZGZmNWNlMjk0MzQxMjRiYzMzMGU5ZGI4YWZlODk0Y2M2ZmUxNzYzMWQ3MmU5YjczNmI4YkbRucU=: 00:24:40.700 21:17:56 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.700 21:17:56 -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:40.700 21:17:56 -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.700 21:17:56 -- host/auth.sh@57 -- # digest=sha512 00:24:40.700 21:17:56 -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.700 21:17:56 -- host/auth.sh@57 -- # keyid=4 00:24:40.700 21:17:56 -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.700 21:17:56 -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:40.700 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.700 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:24:40.700 21:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.700 21:17:56 -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.700 21:17:56 -- nvmf/common.sh@730 -- # local ip 00:24:40.700 21:17:56 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:40.700 21:17:56 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:40.700 21:17:56 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.700 21:17:56 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.700 21:17:56 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:40.700 21:17:56 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.700 21:17:56 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:40.700 21:17:56 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:40.700 21:17:56 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:40.700 21:17:56 -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.700 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:24:40.700 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:24:41.268 nvme0n1 00:24:41.268 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.268 21:17:57 -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.268 21:17:57 -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.268 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.269 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.269 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.269 21:17:57 -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.269 21:17:57 -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.269 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.269 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.269 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.269 21:17:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:41.269 21:17:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.269 21:17:57 -- host/auth.sh@44 -- # digest=sha256 00:24:41.269 21:17:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.269 21:17:57 -- host/auth.sh@44 -- # keyid=1 00:24:41.269 21:17:57 -- host/auth.sh@45 -- # key=DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:41.269 21:17:57 -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:41.269 21:17:57 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.269 21:17:57 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.269 21:17:57 -- host/auth.sh@50 -- # echo DHHC-1:00:YzRhMDE1YTYyMWFiMjk0YWEyNDA5NDI5OTA4YTM1MmFiNThlMzVkYjRlZDJmYTI0YwqxSg==: 00:24:41.269 21:17:57 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: ]] 00:24:41.269 21:17:57 -- host/auth.sh@51 -- # echo DHHC-1:02:YjY0OTc1ZDBlOTAwMDlmYzdmMzA0OGRmZGJmOTIyODJhZjhlNjQ5MDVkNGQ5YTliAnqD2g==: 00:24:41.269 21:17:57 -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:41.269 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.269 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.269 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.269 21:17:57 -- host/auth.sh@112 -- # get_main_ns_ip 00:24:41.269 21:17:57 -- nvmf/common.sh@730 -- # local ip 00:24:41.269 21:17:57 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:41.269 21:17:57 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:41.269 21:17:57 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.269 21:17:57 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.269 21:17:57 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:41.269 21:17:57 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.269 21:17:57 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:41.269 21:17:57 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:41.269 21:17:57 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:41.269 21:17:57 -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:41.269 21:17:57 -- common/autotest_common.sh@638 -- # local es=0 00:24:41.269 21:17:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:41.269 21:17:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:41.269 21:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:41.269 21:17:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:41.269 21:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:41.269 21:17:57 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:41.269 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.269 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.269 request: 00:24:41.269 { 00:24:41.269 "name": "nvme0", 00:24:41.269 "trtype": "tcp", 00:24:41.269 "traddr": "10.0.0.1", 00:24:41.269 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:41.269 "adrfam": "ipv4", 00:24:41.269 "trsvcid": "4420", 00:24:41.269 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:41.269 "method": "bdev_nvme_attach_controller", 00:24:41.269 "req_id": 1 00:24:41.269 } 00:24:41.269 Got JSON-RPC error response 00:24:41.269 response: 00:24:41.269 { 00:24:41.269 "code": -32602, 00:24:41.269 "message": "Invalid parameters" 00:24:41.269 } 00:24:41.269 21:17:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:41.269 21:17:57 -- common/autotest_common.sh@641 -- # es=1 00:24:41.269 21:17:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:41.529 21:17:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:41.529 21:17:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:41.529 21:17:57 -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.529 21:17:57 -- host/auth.sh@114 -- # jq length 00:24:41.529 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.529 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.529 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.529 21:17:57 -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:41.529 21:17:57 -- host/auth.sh@117 -- # get_main_ns_ip 00:24:41.529 21:17:57 -- nvmf/common.sh@730 -- # local ip 00:24:41.529 21:17:57 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:41.529 21:17:57 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:41.529 21:17:57 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.529 21:17:57 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.529 21:17:57 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:41.529 21:17:57 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.529 21:17:57 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:41.529 21:17:57 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:41.529 21:17:57 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:41.529 21:17:57 -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:41.529 21:17:57 -- common/autotest_common.sh@638 -- # local es=0 00:24:41.529 21:17:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:41.529 21:17:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:41.529 21:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:41.529 
21:17:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:41.529 21:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:41.529 21:17:57 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:41.529 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.529 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.529 request: 00:24:41.529 { 00:24:41.529 "name": "nvme0", 00:24:41.529 "trtype": "tcp", 00:24:41.529 "traddr": "10.0.0.1", 00:24:41.529 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:41.529 "adrfam": "ipv4", 00:24:41.529 "trsvcid": "4420", 00:24:41.529 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:41.529 "dhchap_key": "key2", 00:24:41.529 "method": "bdev_nvme_attach_controller", 00:24:41.529 "req_id": 1 00:24:41.529 } 00:24:41.529 Got JSON-RPC error response 00:24:41.529 response: 00:24:41.529 { 00:24:41.529 "code": -32602, 00:24:41.529 "message": "Invalid parameters" 00:24:41.529 } 00:24:41.529 21:17:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:41.529 21:17:57 -- common/autotest_common.sh@641 -- # es=1 00:24:41.529 21:17:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:41.529 21:17:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:41.529 21:17:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:41.529 21:17:57 -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.529 21:17:57 -- host/auth.sh@120 -- # jq length 00:24:41.529 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.529 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.529 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.529 21:17:57 -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:41.529 21:17:57 -- host/auth.sh@123 -- # get_main_ns_ip 00:24:41.529 21:17:57 -- nvmf/common.sh@730 -- # local ip 00:24:41.529 21:17:57 -- nvmf/common.sh@731 -- # ip_candidates=() 00:24:41.529 21:17:57 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:24:41.529 21:17:57 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.529 21:17:57 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.529 21:17:57 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:24:41.529 21:17:57 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.529 21:17:57 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:24:41.529 21:17:57 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:24:41.529 21:17:57 -- nvmf/common.sh@744 -- # echo 10.0.0.1 00:24:41.529 21:17:57 -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:41.529 21:17:57 -- common/autotest_common.sh@638 -- # local es=0 00:24:41.529 21:17:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:41.529 21:17:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:41.529 21:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:41.529 21:17:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:41.529 21:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:24:41.529 21:17:57 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:41.529 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.529 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.529 request: 00:24:41.529 { 00:24:41.529 "name": "nvme0", 00:24:41.529 "trtype": "tcp", 00:24:41.529 "traddr": "10.0.0.1", 00:24:41.529 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:41.529 "adrfam": "ipv4", 00:24:41.529 "trsvcid": "4420", 00:24:41.529 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:41.529 "dhchap_key": "key1", 00:24:41.529 "dhchap_ctrlr_key": "ckey2", 00:24:41.529 "method": "bdev_nvme_attach_controller", 00:24:41.529 "req_id": 1 00:24:41.529 } 00:24:41.529 Got JSON-RPC error response 00:24:41.529 response: 00:24:41.529 { 00:24:41.529 "code": -32602, 00:24:41.529 "message": "Invalid parameters" 00:24:41.529 } 00:24:41.529 21:17:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:41.529 21:17:57 -- common/autotest_common.sh@641 -- # es=1 00:24:41.529 21:17:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:41.529 21:17:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:41.529 21:17:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:41.529 21:17:57 -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:41.529 21:17:57 -- host/auth.sh@128 -- # cleanup 00:24:41.529 21:17:57 -- host/auth.sh@24 -- # nvmftestfini 00:24:41.529 21:17:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:41.529 21:17:57 -- nvmf/common.sh@117 -- # sync 00:24:41.529 21:17:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.529 21:17:57 -- nvmf/common.sh@120 -- # set +e 00:24:41.529 21:17:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.529 21:17:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.529 rmmod nvme_tcp 00:24:41.789 rmmod nvme_fabrics 00:24:41.789 21:17:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.789 21:17:57 -- nvmf/common.sh@124 -- # set -e 00:24:41.789 21:17:57 -- nvmf/common.sh@125 -- # return 0 00:24:41.789 21:17:57 -- nvmf/common.sh@478 -- # '[' -n 3173046 ']' 00:24:41.789 21:17:57 -- nvmf/common.sh@479 -- # killprocess 3173046 00:24:41.789 21:17:57 -- common/autotest_common.sh@936 -- # '[' -z 3173046 ']' 00:24:41.789 21:17:57 -- common/autotest_common.sh@940 -- # kill -0 3173046 00:24:41.789 21:17:57 -- common/autotest_common.sh@941 -- # uname 00:24:41.789 21:17:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:41.789 21:17:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3173046 00:24:41.789 21:17:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:41.789 21:17:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:41.789 21:17:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3173046' 00:24:41.789 killing process with pid 3173046 00:24:41.789 21:17:57 -- common/autotest_common.sh@955 -- # kill 3173046 00:24:41.789 21:17:57 -- common/autotest_common.sh@960 -- # wait 3173046 00:24:42.048 21:17:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:42.048 21:17:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:42.048 21:17:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:42.048 21:17:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.048 21:17:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 
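The three attach attempts traced above (no key, key2 only, and key1 paired with the wrong controller key ckey2) are all expected to be rejected, which is why each one returns the -32602 "Invalid parameters" JSON-RPC error yet the script keeps going: they are wrapped in the NOT helper, which inverts the exit status. The real helper in autotest_common.sh also validates the wrapped command and treats exit codes above 128 (signal deaths) separately; a simplified sketch of the idea, with an illustrative invocation:

  # simplified NOT helper: succeed only when the wrapped command fails
  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded, so the negative test should fail
      fi
      return 0        # command failed, which is the expected outcome
  }

  # e.g. attaching without any DH-HMAC-CHAP key must be refused by the target
  NOT ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0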
00:24:42.048 21:17:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.048 21:17:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.048 21:17:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.954 21:17:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.954 21:17:59 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:43.954 21:17:59 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:43.954 21:17:59 -- host/auth.sh@27 -- # clean_kernel_target 00:24:43.954 21:17:59 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:43.954 21:17:59 -- nvmf/common.sh@675 -- # echo 0 00:24:43.954 21:17:59 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:43.954 21:17:59 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:43.954 21:17:59 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:43.954 21:17:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:43.954 21:17:59 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:43.954 21:17:59 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:43.954 21:17:59 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:47.239 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:47.239 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:47.807 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:47.807 21:18:03 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.llJ /tmp/spdk.key-null.iEz /tmp/spdk.key-sha256.QSL /tmp/spdk.key-sha384.8XC /tmp/spdk.key-sha512.PLF /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:47.807 21:18:03 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:51.094 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:51.094 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:00:04.0 
(8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:24:51.094 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:24:51.094 00:24:51.094 real 0m50.600s 00:24:51.095 user 0m44.200s 00:24:51.095 sys 0m12.913s 00:24:51.095 21:18:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:51.095 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.095 ************************************ 00:24:51.095 END TEST nvmf_auth_host 00:24:51.095 ************************************ 00:24:51.095 21:18:06 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:24:51.095 21:18:06 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:51.095 21:18:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:51.095 21:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:51.095 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.095 ************************************ 00:24:51.095 START TEST nvmf_digest 00:24:51.095 ************************************ 00:24:51.095 21:18:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:51.095 * Looking for test storage... 
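Before nvmf_digest starts, the auth run above tears down the kernel nvmet target by unwinding configfs in roughly the reverse order it was built, then unloads the target modules. A minimal sketch of that teardown (run as root), reusing the subsystem, port, and host NQNs from this run; the file the bare "echo 0" in the trace writes to is not shown there, so the namespace enable attribute below is an assumption:

  # drop the host from the subsystem's allow list and delete the host object
  rm -f /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # disable the namespace, unlink the subsystem from the port, then remove the objects
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable   # assumed target of the bare "echo 0"
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  # finally unload the kernel target modules
  modprobe -r nvmet_tcp nvmet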
00:24:51.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.095 21:18:06 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.095 21:18:06 -- nvmf/common.sh@7 -- # uname -s 00:24:51.095 21:18:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.095 21:18:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.095 21:18:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.095 21:18:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.095 21:18:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.095 21:18:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.095 21:18:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.095 21:18:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.095 21:18:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.095 21:18:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.095 21:18:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:51.095 21:18:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:51.095 21:18:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.095 21:18:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.095 21:18:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.095 21:18:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.095 21:18:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.095 21:18:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.095 21:18:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.095 21:18:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.095 21:18:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.095 21:18:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.095 21:18:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.095 21:18:06 -- paths/export.sh@5 -- # export PATH 00:24:51.095 21:18:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.095 21:18:06 -- nvmf/common.sh@47 -- # : 0 00:24:51.095 21:18:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.095 21:18:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.095 21:18:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.095 21:18:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.095 21:18:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.095 21:18:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.095 21:18:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.095 21:18:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.095 21:18:06 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:51.095 21:18:06 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:51.095 21:18:06 -- host/digest.sh@16 -- # runtime=2 00:24:51.095 21:18:06 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:51.095 21:18:06 -- host/digest.sh@138 -- # nvmftestinit 00:24:51.095 21:18:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:51.095 21:18:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.095 21:18:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:51.095 21:18:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:51.095 21:18:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:51.095 21:18:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.095 21:18:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.095 21:18:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.095 21:18:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:51.095 21:18:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:51.095 21:18:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:51.095 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:24:57.661 21:18:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:57.661 21:18:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.661 21:18:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.661 21:18:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.661 21:18:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.661 21:18:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:57.661 21:18:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.661 21:18:12 -- 
nvmf/common.sh@295 -- # net_devs=() 00:24:57.661 21:18:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.661 21:18:12 -- nvmf/common.sh@296 -- # e810=() 00:24:57.661 21:18:12 -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.661 21:18:12 -- nvmf/common.sh@297 -- # x722=() 00:24:57.661 21:18:12 -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.661 21:18:12 -- nvmf/common.sh@298 -- # mlx=() 00:24:57.661 21:18:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.661 21:18:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.661 21:18:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.661 21:18:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.661 21:18:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:57.661 21:18:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.661 21:18:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:57.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:57.661 21:18:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.661 21:18:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:57.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:57.661 21:18:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.661 21:18:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.661 21:18:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.661 21:18:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:57.661 21:18:12 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.661 21:18:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:57.661 Found net devices under 0000:86:00.0: cvl_0_0 00:24:57.661 21:18:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.661 21:18:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.661 21:18:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.661 21:18:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:57.661 21:18:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.661 21:18:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:57.661 Found net devices under 0000:86:00.1: cvl_0_1 00:24:57.661 21:18:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.661 21:18:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:57.661 21:18:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:57.661 21:18:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:57.661 21:18:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:57.661 21:18:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.661 21:18:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.661 21:18:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.661 21:18:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.662 21:18:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.662 21:18:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.662 21:18:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.662 21:18:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.662 21:18:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.662 21:18:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.662 21:18:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.662 21:18:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.662 21:18:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.662 21:18:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.662 21:18:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.662 21:18:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.662 21:18:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.662 21:18:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.662 21:18:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.662 21:18:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:24:57.662 00:24:57.662 --- 10.0.0.2 ping statistics --- 00:24:57.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.662 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:24:57.662 21:18:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:24:57.662 00:24:57.662 --- 10.0.0.1 ping statistics --- 00:24:57.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.662 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:57.662 21:18:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.662 21:18:13 -- nvmf/common.sh@411 -- # return 0 00:24:57.662 21:18:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:57.662 21:18:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.662 21:18:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:57.662 21:18:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:57.662 21:18:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.662 21:18:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:57.662 21:18:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:57.662 21:18:13 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:57.662 21:18:13 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:57.662 21:18:13 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:57.662 21:18:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:57.662 21:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:57.662 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:24:57.662 ************************************ 00:24:57.662 START TEST nvmf_digest_clean 00:24:57.662 ************************************ 00:24:57.662 21:18:13 -- common/autotest_common.sh@1111 -- # run_digest 00:24:57.662 21:18:13 -- host/digest.sh@120 -- # local dsa_initiator 00:24:57.662 21:18:13 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:57.662 21:18:13 -- host/digest.sh@121 -- # dsa_initiator=false 00:24:57.662 21:18:13 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:57.662 21:18:13 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:57.662 21:18:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:57.662 21:18:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:57.662 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:24:57.662 21:18:13 -- nvmf/common.sh@470 -- # nvmfpid=3187425 00:24:57.662 21:18:13 -- nvmf/common.sh@471 -- # waitforlisten 3187425 00:24:57.662 21:18:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:57.662 21:18:13 -- common/autotest_common.sh@817 -- # '[' -z 3187425 ']' 00:24:57.662 21:18:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.662 21:18:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:57.662 21:18:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.662 21:18:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:57.662 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:24:57.662 [2024-04-18 21:18:13.444255] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
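For reference, the target/initiator split that nvmf_tcp_init performs above boils down to the following commands; this is a minimal sketch assembled from the trace, assuming the two e810 ports were exposed as cvl_0_0 (target side) and cvl_0_1 (initiator side) as shown in the device scan:

    # move one port into its own namespace to act as the NVMe/TCP target
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator-side port and the namespaced target port
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow the NVMe/TCP listener port through the firewall and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1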
00:24:57.662 [2024-04-18 21:18:13.444300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.662 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.662 [2024-04-18 21:18:13.510900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.662 [2024-04-18 21:18:13.585264] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.662 [2024-04-18 21:18:13.585305] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.662 [2024-04-18 21:18:13.585313] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.662 [2024-04-18 21:18:13.585319] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.662 [2024-04-18 21:18:13.585324] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.662 [2024-04-18 21:18:13.585345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.597 21:18:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:58.597 21:18:14 -- common/autotest_common.sh@850 -- # return 0 00:24:58.597 21:18:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:58.597 21:18:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:58.597 21:18:14 -- common/autotest_common.sh@10 -- # set +x 00:24:58.597 21:18:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.597 21:18:14 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:58.597 21:18:14 -- host/digest.sh@126 -- # common_target_config 00:24:58.597 21:18:14 -- host/digest.sh@43 -- # rpc_cmd 00:24:58.597 21:18:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.597 21:18:14 -- common/autotest_common.sh@10 -- # set +x 00:24:58.597 null0 00:24:58.597 [2024-04-18 21:18:14.360773] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.597 [2024-04-18 21:18:14.384955] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.597 21:18:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.597 21:18:14 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:58.597 21:18:14 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:58.597 21:18:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:58.597 21:18:14 -- host/digest.sh@80 -- # rw=randread 00:24:58.597 21:18:14 -- host/digest.sh@80 -- # bs=4096 00:24:58.597 21:18:14 -- host/digest.sh@80 -- # qd=128 00:24:58.597 21:18:14 -- host/digest.sh@80 -- # scan_dsa=false 00:24:58.598 21:18:14 -- host/digest.sh@83 -- # bperfpid=3187672 00:24:58.598 21:18:14 -- host/digest.sh@84 -- # waitforlisten 3187672 /var/tmp/bperf.sock 00:24:58.598 21:18:14 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:58.598 21:18:14 -- common/autotest_common.sh@817 -- # '[' -z 3187672 ']' 00:24:58.598 21:18:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.598 21:18:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:58.598 21:18:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.598 21:18:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:58.598 21:18:14 -- common/autotest_common.sh@10 -- # set +x 00:24:58.598 [2024-04-18 21:18:14.433668] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:24:58.598 [2024-04-18 21:18:14.433711] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3187672 ] 00:24:58.598 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.598 [2024-04-18 21:18:14.491727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.856 [2024-04-18 21:18:14.570487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.422 21:18:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.422 21:18:15 -- common/autotest_common.sh@850 -- # return 0 00:24:59.422 21:18:15 -- host/digest.sh@86 -- # false 00:24:59.422 21:18:15 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:59.422 21:18:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:59.681 21:18:15 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.681 21:18:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.939 nvme0n1 00:24:59.939 21:18:15 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:59.939 21:18:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:00.197 Running I/O for 2 seconds... 
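Each bperf run above follows the same RPC sequence on the initiator side; a minimal sketch of that flow, with paths shortened to the spdk tree and the socket, address, and NQN taken from the trace:

    # start bdevperf paused (--wait-for-rpc) so digest options can be configured first
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # attach the target listener with TCP data digest enabled (--ddgst)
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # drive I/O for the configured runtime
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests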
00:25:02.098 00:25:02.098 Latency(us) 00:25:02.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.098 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:02.098 nvme0n1 : 2.04 25809.14 100.82 0.00 0.00 4877.49 2464.72 44222.55 00:25:02.098 =================================================================================================================== 00:25:02.098 Total : 25809.14 100.82 0.00 0.00 4877.49 2464.72 44222.55 00:25:02.098 0 00:25:02.098 21:18:17 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:02.098 21:18:17 -- host/digest.sh@93 -- # get_accel_stats 00:25:02.098 21:18:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:02.098 21:18:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:02.098 | select(.opcode=="crc32c") 00:25:02.098 | "\(.module_name) \(.executed)"' 00:25:02.098 21:18:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:02.358 21:18:18 -- host/digest.sh@94 -- # false 00:25:02.358 21:18:18 -- host/digest.sh@94 -- # exp_module=software 00:25:02.358 21:18:18 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:02.358 21:18:18 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:02.358 21:18:18 -- host/digest.sh@98 -- # killprocess 3187672 00:25:02.358 21:18:18 -- common/autotest_common.sh@936 -- # '[' -z 3187672 ']' 00:25:02.358 21:18:18 -- common/autotest_common.sh@940 -- # kill -0 3187672 00:25:02.358 21:18:18 -- common/autotest_common.sh@941 -- # uname 00:25:02.358 21:18:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:02.358 21:18:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3187672 00:25:02.358 21:18:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:02.358 21:18:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:02.358 21:18:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3187672' 00:25:02.358 killing process with pid 3187672 00:25:02.358 21:18:18 -- common/autotest_common.sh@955 -- # kill 3187672 00:25:02.358 Received shutdown signal, test time was about 2.000000 seconds 00:25:02.358 00:25:02.358 Latency(us) 00:25:02.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.358 =================================================================================================================== 00:25:02.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.358 21:18:18 -- common/autotest_common.sh@960 -- # wait 3187672 00:25:02.617 21:18:18 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:02.617 21:18:18 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:02.617 21:18:18 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:02.617 21:18:18 -- host/digest.sh@80 -- # rw=randread 00:25:02.617 21:18:18 -- host/digest.sh@80 -- # bs=131072 00:25:02.617 21:18:18 -- host/digest.sh@80 -- # qd=16 00:25:02.617 21:18:18 -- host/digest.sh@80 -- # scan_dsa=false 00:25:02.617 21:18:18 -- host/digest.sh@83 -- # bperfpid=3188365 00:25:02.617 21:18:18 -- host/digest.sh@84 -- # waitforlisten 3188365 /var/tmp/bperf.sock 00:25:02.617 21:18:18 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:02.617 21:18:18 -- common/autotest_common.sh@817 -- # '[' -z 3188365 ']' 00:25:02.617 21:18:18 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.617 21:18:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:02.617 21:18:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.617 21:18:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:02.617 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:25:02.617 [2024-04-18 21:18:18.459273] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:02.617 [2024-04-18 21:18:18.459324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188365 ] 00:25:02.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:02.617 Zero copy mechanism will not be used. 00:25:02.617 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.617 [2024-04-18 21:18:18.518330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.875 [2024-04-18 21:18:18.597039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.442 21:18:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:03.442 21:18:19 -- common/autotest_common.sh@850 -- # return 0 00:25:03.442 21:18:19 -- host/digest.sh@86 -- # false 00:25:03.442 21:18:19 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:03.442 21:18:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:03.701 21:18:19 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.701 21:18:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.959 nvme0n1 00:25:03.959 21:18:19 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:03.959 21:18:19 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:03.959 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:03.959 Zero copy mechanism will not be used. 00:25:03.959 Running I/O for 2 seconds... 
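After every run the test confirms which accel module actually computed the CRC-32C digests; the get_accel_stats check seen above (digest.sh@36-37) reduces to roughly the following, assuming the same bperf socket:

    # list crc32c operations per accel module; a non-zero count from the
    # 'software' module confirms digests were computed on the host CPU
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'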
00:25:06.492 00:25:06.492 Latency(us) 00:25:06.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.492 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:06.492 nvme0n1 : 2.00 3362.68 420.33 0.00 0.00 4755.24 3575.99 12309.37 00:25:06.492 =================================================================================================================== 00:25:06.492 Total : 3362.68 420.33 0.00 0.00 4755.24 3575.99 12309.37 00:25:06.492 0 00:25:06.492 21:18:21 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:06.492 21:18:21 -- host/digest.sh@93 -- # get_accel_stats 00:25:06.492 21:18:21 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:06.492 21:18:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:06.492 21:18:21 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:06.492 | select(.opcode=="crc32c") 00:25:06.492 | "\(.module_name) \(.executed)"' 00:25:06.492 21:18:22 -- host/digest.sh@94 -- # false 00:25:06.492 21:18:22 -- host/digest.sh@94 -- # exp_module=software 00:25:06.492 21:18:22 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:06.492 21:18:22 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:06.492 21:18:22 -- host/digest.sh@98 -- # killprocess 3188365 00:25:06.492 21:18:22 -- common/autotest_common.sh@936 -- # '[' -z 3188365 ']' 00:25:06.492 21:18:22 -- common/autotest_common.sh@940 -- # kill -0 3188365 00:25:06.492 21:18:22 -- common/autotest_common.sh@941 -- # uname 00:25:06.492 21:18:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:06.492 21:18:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3188365 00:25:06.492 21:18:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:06.492 21:18:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:06.492 21:18:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3188365' 00:25:06.492 killing process with pid 3188365 00:25:06.492 21:18:22 -- common/autotest_common.sh@955 -- # kill 3188365 00:25:06.492 Received shutdown signal, test time was about 2.000000 seconds 00:25:06.492 00:25:06.492 Latency(us) 00:25:06.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.492 =================================================================================================================== 00:25:06.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.492 21:18:22 -- common/autotest_common.sh@960 -- # wait 3188365 00:25:06.492 21:18:22 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:06.492 21:18:22 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:06.492 21:18:22 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:06.492 21:18:22 -- host/digest.sh@80 -- # rw=randwrite 00:25:06.492 21:18:22 -- host/digest.sh@80 -- # bs=4096 00:25:06.492 21:18:22 -- host/digest.sh@80 -- # qd=128 00:25:06.492 21:18:22 -- host/digest.sh@80 -- # scan_dsa=false 00:25:06.492 21:18:22 -- host/digest.sh@83 -- # bperfpid=3188990 00:25:06.492 21:18:22 -- host/digest.sh@84 -- # waitforlisten 3188990 /var/tmp/bperf.sock 00:25:06.492 21:18:22 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:06.492 21:18:22 -- common/autotest_common.sh@817 -- # '[' -z 3188990 ']' 00:25:06.492 21:18:22 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:06.492 21:18:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:06.492 21:18:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:06.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:06.492 21:18:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:06.492 21:18:22 -- common/autotest_common.sh@10 -- # set +x 00:25:06.492 [2024-04-18 21:18:22.391622] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:06.492 [2024-04-18 21:18:22.391669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188990 ] 00:25:06.492 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.751 [2024-04-18 21:18:22.451568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.751 [2024-04-18 21:18:22.528471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.318 21:18:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:07.318 21:18:23 -- common/autotest_common.sh@850 -- # return 0 00:25:07.318 21:18:23 -- host/digest.sh@86 -- # false 00:25:07.318 21:18:23 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:07.318 21:18:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:07.576 21:18:23 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.576 21:18:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:08.143 nvme0n1 00:25:08.143 21:18:23 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:08.143 21:18:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:08.143 Running I/O for 2 seconds... 
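As a quick sanity check on the result tables above, the MiB/s column is simply IOPS multiplied by the I/O size; for the 131072-byte randread run, for example (back-of-the-envelope only, numbers taken from the table):

    # 3362.68 IOPS x 131072 B per I/O, expressed in MiB/s
    echo 'scale=2; 3362.68 * 131072 / (1024 * 1024)' | bc    # 420.33, matching the table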
00:25:10.045 00:25:10.045 Latency(us) 00:25:10.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.046 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:10.046 nvme0n1 : 2.00 26273.04 102.63 0.00 0.00 4863.25 3989.15 19261.89 00:25:10.046 =================================================================================================================== 00:25:10.046 Total : 26273.04 102.63 0.00 0.00 4863.25 3989.15 19261.89 00:25:10.046 0 00:25:10.046 21:18:25 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:10.046 21:18:25 -- host/digest.sh@93 -- # get_accel_stats 00:25:10.046 21:18:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:10.046 21:18:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:10.046 21:18:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:10.046 | select(.opcode=="crc32c") 00:25:10.046 | "\(.module_name) \(.executed)"' 00:25:10.304 21:18:26 -- host/digest.sh@94 -- # false 00:25:10.304 21:18:26 -- host/digest.sh@94 -- # exp_module=software 00:25:10.304 21:18:26 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:10.304 21:18:26 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:10.304 21:18:26 -- host/digest.sh@98 -- # killprocess 3188990 00:25:10.304 21:18:26 -- common/autotest_common.sh@936 -- # '[' -z 3188990 ']' 00:25:10.304 21:18:26 -- common/autotest_common.sh@940 -- # kill -0 3188990 00:25:10.304 21:18:26 -- common/autotest_common.sh@941 -- # uname 00:25:10.304 21:18:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:10.304 21:18:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3188990 00:25:10.304 21:18:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:10.304 21:18:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:10.304 21:18:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3188990' 00:25:10.304 killing process with pid 3188990 00:25:10.304 21:18:26 -- common/autotest_common.sh@955 -- # kill 3188990 00:25:10.304 Received shutdown signal, test time was about 2.000000 seconds 00:25:10.304 00:25:10.304 Latency(us) 00:25:10.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.304 =================================================================================================================== 00:25:10.304 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.304 21:18:26 -- common/autotest_common.sh@960 -- # wait 3188990 00:25:10.562 21:18:26 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:10.562 21:18:26 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:10.562 21:18:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:10.562 21:18:26 -- host/digest.sh@80 -- # rw=randwrite 00:25:10.562 21:18:26 -- host/digest.sh@80 -- # bs=131072 00:25:10.562 21:18:26 -- host/digest.sh@80 -- # qd=16 00:25:10.562 21:18:26 -- host/digest.sh@80 -- # scan_dsa=false 00:25:10.562 21:18:26 -- host/digest.sh@83 -- # bperfpid=3189570 00:25:10.562 21:18:26 -- host/digest.sh@84 -- # waitforlisten 3189570 /var/tmp/bperf.sock 00:25:10.562 21:18:26 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:10.562 21:18:26 -- common/autotest_common.sh@817 -- # '[' -z 3189570 ']' 00:25:10.562 
21:18:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:10.562 21:18:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:10.562 21:18:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:10.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:10.562 21:18:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:10.562 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:10.562 [2024-04-18 21:18:26.433142] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:10.562 [2024-04-18 21:18:26.433200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189570 ] 00:25:10.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:10.562 Zero copy mechanism will not be used. 00:25:10.562 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.821 [2024-04-18 21:18:26.495520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.821 [2024-04-18 21:18:26.566397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.387 21:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:11.387 21:18:27 -- common/autotest_common.sh@850 -- # return 0 00:25:11.387 21:18:27 -- host/digest.sh@86 -- # false 00:25:11.387 21:18:27 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:11.387 21:18:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:11.645 21:18:27 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.645 21:18:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.903 nvme0n1 00:25:11.903 21:18:27 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:11.903 21:18:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:11.903 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:11.903 Zero copy mechanism will not be used. 00:25:11.903 Running I/O for 2 seconds... 
00:25:14.434 00:25:14.434 Latency(us) 00:25:14.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.434 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:14.434 nvme0n1 : 2.01 3079.05 384.88 0.00 0.00 5186.96 3319.54 26328.38 00:25:14.434 =================================================================================================================== 00:25:14.434 Total : 3079.05 384.88 0.00 0.00 5186.96 3319.54 26328.38 00:25:14.434 0 00:25:14.434 21:18:29 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:14.434 21:18:29 -- host/digest.sh@93 -- # get_accel_stats 00:25:14.434 21:18:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:14.434 21:18:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:14.434 | select(.opcode=="crc32c") 00:25:14.434 | "\(.module_name) \(.executed)"' 00:25:14.434 21:18:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:14.434 21:18:29 -- host/digest.sh@94 -- # false 00:25:14.434 21:18:29 -- host/digest.sh@94 -- # exp_module=software 00:25:14.434 21:18:29 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:14.434 21:18:29 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:14.434 21:18:29 -- host/digest.sh@98 -- # killprocess 3189570 00:25:14.434 21:18:29 -- common/autotest_common.sh@936 -- # '[' -z 3189570 ']' 00:25:14.434 21:18:29 -- common/autotest_common.sh@940 -- # kill -0 3189570 00:25:14.434 21:18:29 -- common/autotest_common.sh@941 -- # uname 00:25:14.434 21:18:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:14.434 21:18:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3189570 00:25:14.434 21:18:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:14.434 21:18:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:14.434 21:18:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3189570' 00:25:14.434 killing process with pid 3189570 00:25:14.434 21:18:30 -- common/autotest_common.sh@955 -- # kill 3189570 00:25:14.434 Received shutdown signal, test time was about 2.000000 seconds 00:25:14.434 00:25:14.434 Latency(us) 00:25:14.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.434 =================================================================================================================== 00:25:14.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.434 21:18:30 -- common/autotest_common.sh@960 -- # wait 3189570 00:25:14.434 21:18:30 -- host/digest.sh@132 -- # killprocess 3187425 00:25:14.434 21:18:30 -- common/autotest_common.sh@936 -- # '[' -z 3187425 ']' 00:25:14.434 21:18:30 -- common/autotest_common.sh@940 -- # kill -0 3187425 00:25:14.434 21:18:30 -- common/autotest_common.sh@941 -- # uname 00:25:14.434 21:18:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:14.434 21:18:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3187425 00:25:14.434 21:18:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:14.434 21:18:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:14.434 21:18:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3187425' 00:25:14.434 killing process with pid 3187425 00:25:14.434 21:18:30 -- common/autotest_common.sh@955 -- # kill 3187425 00:25:14.434 21:18:30 -- common/autotest_common.sh@960 -- # wait 3187425 00:25:14.713 
00:25:14.713 real 0m17.087s 00:25:14.713 user 0m33.220s 00:25:14.713 sys 0m3.933s 00:25:14.713 21:18:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:14.713 21:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:14.713 ************************************ 00:25:14.713 END TEST nvmf_digest_clean 00:25:14.713 ************************************ 00:25:14.713 21:18:30 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:14.713 21:18:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:14.713 21:18:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:14.713 21:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:15.002 ************************************ 00:25:15.002 START TEST nvmf_digest_error 00:25:15.002 ************************************ 00:25:15.002 21:18:30 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:15.002 21:18:30 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:15.002 21:18:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:15.002 21:18:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:15.002 21:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:15.002 21:18:30 -- nvmf/common.sh@470 -- # nvmfpid=3190290 00:25:15.003 21:18:30 -- nvmf/common.sh@471 -- # waitforlisten 3190290 00:25:15.003 21:18:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:15.003 21:18:30 -- common/autotest_common.sh@817 -- # '[' -z 3190290 ']' 00:25:15.003 21:18:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.003 21:18:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:15.003 21:18:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.003 21:18:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:15.003 21:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:15.003 [2024-04-18 21:18:30.697159] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:15.003 [2024-04-18 21:18:30.697201] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.003 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.003 [2024-04-18 21:18:30.761981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.003 [2024-04-18 21:18:30.836569] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.003 [2024-04-18 21:18:30.836605] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.003 [2024-04-18 21:18:30.836613] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.003 [2024-04-18 21:18:30.836620] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.003 [2024-04-18 21:18:30.836626] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:15.003 [2024-04-18 21:18:30.836643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.597 21:18:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:15.597 21:18:31 -- common/autotest_common.sh@850 -- # return 0 00:25:15.597 21:18:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:15.597 21:18:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:15.597 21:18:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.856 21:18:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.856 21:18:31 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:15.856 21:18:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.856 21:18:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.856 [2024-04-18 21:18:31.534678] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:15.856 21:18:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.856 21:18:31 -- host/digest.sh@105 -- # common_target_config 00:25:15.856 21:18:31 -- host/digest.sh@43 -- # rpc_cmd 00:25:15.856 21:18:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.856 21:18:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.856 null0 00:25:15.856 [2024-04-18 21:18:31.627381] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.856 [2024-04-18 21:18:31.651567] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.856 21:18:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.856 21:18:31 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:15.856 21:18:31 -- host/digest.sh@54 -- # local rw bs qd 00:25:15.856 21:18:31 -- host/digest.sh@56 -- # rw=randread 00:25:15.856 21:18:31 -- host/digest.sh@56 -- # bs=4096 00:25:15.856 21:18:31 -- host/digest.sh@56 -- # qd=128 00:25:15.856 21:18:31 -- host/digest.sh@58 -- # bperfpid=3190525 00:25:15.856 21:18:31 -- host/digest.sh@60 -- # waitforlisten 3190525 /var/tmp/bperf.sock 00:25:15.856 21:18:31 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:15.856 21:18:31 -- common/autotest_common.sh@817 -- # '[' -z 3190525 ']' 00:25:15.856 21:18:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:15.856 21:18:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:15.856 21:18:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:15.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:15.856 21:18:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:15.856 21:18:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.856 [2024-04-18 21:18:31.699935] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:25:15.856 [2024-04-18 21:18:31.699976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190525 ] 00:25:15.856 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.856 [2024-04-18 21:18:31.758485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.115 [2024-04-18 21:18:31.836965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.683 21:18:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.683 21:18:32 -- common/autotest_common.sh@850 -- # return 0 00:25:16.683 21:18:32 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:16.683 21:18:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:16.942 21:18:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:16.942 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.942 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:25:16.942 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.942 21:18:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.942 21:18:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.202 nvme0n1 00:25:17.202 21:18:33 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:17.202 21:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.202 21:18:33 -- common/autotest_common.sh@10 -- # set +x 00:25:17.202 21:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.202 21:18:33 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:17.202 21:18:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.461 Running I/O for 2 seconds... 
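The error-path variant differs from the clean runs only in how the crc32c opcode is wired up before I/O starts: the target is launched with --wait-for-rpc, crc32c is assigned to the 'error' accel module, and corruption is injected once the ddgst-enabled controller is attached. A minimal sketch of that sequence using the RPCs visible in the trace (target-side calls use the default rpc.py socket; transport and subsystem setup are as in the clean test):

    # target side: route crc32c through the error module before the framework starts
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error
    # keep injection disabled while the initiator connects
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results so every read completes with a data digest error
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests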
00:25:17.461 [2024-04-18 21:18:33.169166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.169197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.169207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.179747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.179771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.179780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.190102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.190124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.190132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.198596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.198616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.198625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.209085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.209106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.209114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.218668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.218688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.218696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.229279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.229299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.229307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.238399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.238418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.238429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.248369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.248388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.248396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.257608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.257627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.257635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.267660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.267679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.267687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.276905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.276924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.276933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.287575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.287594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.287602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.296177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.296196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.296204] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.307679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.307701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.307710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.317233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.317253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.317262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.327721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.327744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.327752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.337792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.337812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.337820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.347423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.347443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.347451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.357966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.357986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.357993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.369209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.369228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 
21:18:33.369236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.378590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.378608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.378616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.461 [2024-04-18 21:18:33.388517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.461 [2024-04-18 21:18:33.388537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.461 [2024-04-18 21:18:33.388546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.720 [2024-04-18 21:18:33.398141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.720 [2024-04-18 21:18:33.398160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.720 [2024-04-18 21:18:33.398167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.407778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.407798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.407806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.417210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.417229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.417236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.426985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.427003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.427011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.437577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.437597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2666 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.437606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.447020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.447041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.447050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.457307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.457327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.457335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.465707] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.465727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.465735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.476469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.476490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.476498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.486739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.486761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.486769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.496213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.496233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.496245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.506450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.506471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:101 nsid:1 lba:25124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.506479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.515362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.515382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.515390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.525187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.525207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.525215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.534229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.534248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.534256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.544055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.544075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.544083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.554738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.554758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.554766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.563823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.563842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.563850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.573442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.573462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.573469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.583704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.583727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.583735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.592744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.592764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.592772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.602440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.602460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.602468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.613343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.613362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.613369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.621483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.621503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.621518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.633324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.633344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.633351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.721 [2024-04-18 21:18:33.641480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1477ce0) 00:25:17.721 [2024-04-18 21:18:33.641499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.721 [2024-04-18 21:18:33.641507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.652245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.652265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.652273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.662332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.662352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.671104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.671124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.671132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.680744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.680765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.680774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.690982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.691003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.691012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.701120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.701140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.701149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.710960] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.710981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.710989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.720414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.720433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.720441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.730189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.730210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.730218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.738837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.981 [2024-04-18 21:18:33.738857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.981 [2024-04-18 21:18:33.738865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.981 [2024-04-18 21:18:33.748902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.748926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.748935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.758840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.758860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.758867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.769057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.769076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.769084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:17.982 [2024-04-18 21:18:33.777862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.777881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.777889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.788153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.788173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.788181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.797994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.798013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.798021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.806007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.806026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.806034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.816870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.816890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.816898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.827454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.827474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.827483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.835292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.835312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.835321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.846127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.846147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.846155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.855120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.855139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.855147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.865608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.865627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.865634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.874620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.874639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.874647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.885419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.885438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.885445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.893937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.893956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.893964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.982 [2024-04-18 21:18:33.903911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:17.982 [2024-04-18 21:18:33.903930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.982 [2024-04-18 21:18:33.903938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.241 [2024-04-18 21:18:33.913626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.241 [2024-04-18 21:18:33.913645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.241 [2024-04-18 21:18:33.913656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.241 [2024-04-18 21:18:33.924410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:33.924429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:33.924437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:33.934266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:33.934284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:33.934292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:33.942709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:33.942730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:33.942738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:33.952904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:33.952924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:33.952932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:33.962293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:33.962312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:33.962321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:33.971197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:33.971217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.242 [2024-04-18 21:18:33.971225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:33.982140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:33.982160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:33.982167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:33.991918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:33.991937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:33.991944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.000110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.000133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.000143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.011289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.011311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.011319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.020428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.020447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.020456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.030211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.030230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.030239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.039621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.039641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:20261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.039650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.049849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.049869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.049877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.059606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.059626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.059634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.069100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.069120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.069128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.078586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.078605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.078612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.087837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.087857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.087864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.098489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.098508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.098521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.106785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.106805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.106813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.116504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.116528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.116536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.126780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.126799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.126807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.136745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.136764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.136772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.146475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.146494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.146502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.154649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.154668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.154676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.242 [2024-04-18 21:18:34.164965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.242 [2024-04-18 21:18:34.164983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.242 [2024-04-18 21:18:34.164994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.502 [2024-04-18 21:18:34.176230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 
00:25:18.502 [2024-04-18 21:18:34.176249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.502 [2024-04-18 21:18:34.176257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.502 [2024-04-18 21:18:34.185245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.502 [2024-04-18 21:18:34.185265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.502 [2024-04-18 21:18:34.185272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.502 [2024-04-18 21:18:34.195435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.502 [2024-04-18 21:18:34.195456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.502 [2024-04-18 21:18:34.195464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.502 [2024-04-18 21:18:34.205724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.502 [2024-04-18 21:18:34.205744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.502 [2024-04-18 21:18:34.205753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.502 [2024-04-18 21:18:34.213962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.502 [2024-04-18 21:18:34.213983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.502 [2024-04-18 21:18:34.213991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.502 [2024-04-18 21:18:34.224281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.502 [2024-04-18 21:18:34.224302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.502 [2024-04-18 21:18:34.224310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.502 [2024-04-18 21:18:34.234673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.502 [2024-04-18 21:18:34.234692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.502 [2024-04-18 21:18:34.234700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.502 [2024-04-18 21:18:34.243858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.243877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.243886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.253265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.253284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.253292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.263160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.263179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.263187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.272732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.272752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.272759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.282918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.282937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.282945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.291985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.292003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.292011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.301118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.301137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.301145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.311586] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.311606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.311613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.320574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.320593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.320601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.330689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.330708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.330719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.339691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.339711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.339719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.349617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.349636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.349644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.359633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.359652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.359660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.367850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.367868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.367876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.378148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.378168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.378176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.387717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.387737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.387745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.398311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.398330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.398338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.406615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.406634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.406642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.417754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.417776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.417784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.503 [2024-04-18 21:18:34.427179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.503 [2024-04-18 21:18:34.427197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.503 [2024-04-18 21:18:34.427205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.762 [2024-04-18 21:18:34.436236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.436255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.436263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.447150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.447171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.447180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.456288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.456307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.456315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.465029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.465048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.465056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.476169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.476189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.476197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.484870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.484889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.484897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.494052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.494072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.494080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.505132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.505153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.505161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.513351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.513370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.513378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.523929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.523948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.523955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.533438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.533458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.533465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.542518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.542537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.542545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.552932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.552988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.552997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.562920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.562941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.562949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.572348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.572367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.763 [2024-04-18 21:18:34.572375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.581766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.581785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.581796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.591043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.591062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.591070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.601232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.601251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.601261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.610526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.610545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.610553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.619930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.619950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.619957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.630042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.630060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.630068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.639962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.639980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:1662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.639988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.649321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.649340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.649348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.659052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.659071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.659079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.669325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.669345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.669353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.679147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.679167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.679175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.763 [2024-04-18 21:18:34.688568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:18.763 [2024-04-18 21:18:34.688587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.763 [2024-04-18 21:18:34.688595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.698985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.699006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.699014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.707610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.707630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.707638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.718713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.718734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.718741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.728331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.728351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.728358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.737312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.737332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.737340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.747825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.747844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.747855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.756987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.757007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.757014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.769772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.769791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.769798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.778805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 
00:25:19.023 [2024-04-18 21:18:34.778824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.778831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.791756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.791775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.791783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.803656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.803675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.023 [2024-04-18 21:18:34.803683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.023 [2024-04-18 21:18:34.813079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.023 [2024-04-18 21:18:34.813098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.813106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.822951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.822970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.822978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.832707] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.832727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.832734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.842235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.842257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.842264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.851131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.851150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.851159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.861092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.861111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.861119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.870833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.870852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.870860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.879692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.879713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.879722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.890949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.890969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.890977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.899875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.899896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.899904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.909254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.909274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.909283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.919375] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.919395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.919403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.928694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.928715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.928723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.938618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.938638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.938645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.024 [2024-04-18 21:18:34.949569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.024 [2024-04-18 21:18:34.949590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.024 [2024-04-18 21:18:34.949599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:34.958347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:34.958367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:34.958375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:34.967998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:34.968018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:34.968026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:34.979079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:34.979099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:34.979106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:34.987320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:34.987340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:34.987348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:34.998376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:34.998396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:34.998404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:35.006673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:35.006692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:35.006704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:35.018131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:35.018151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:35.018159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:35.025990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:35.026010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:35.026018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:35.036803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:35.036822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:35.036829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.284 [2024-04-18 21:18:35.047009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.284 [2024-04-18 21:18:35.047029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.284 [2024-04-18 21:18:35.047036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.057063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.057082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.057090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.065913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.065934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.065942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.076010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.076030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.076038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.086548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.086568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.086576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.095515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.095535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.095543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.105990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.106010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.106018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.114594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.114615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.114624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.124755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.124774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.124782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.134945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.134965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.134973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.144711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.144732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.144740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 [2024-04-18 21:18:35.154222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1477ce0) 00:25:19.285 [2024-04-18 21:18:35.154242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.285 [2024-04-18 21:18:35.154249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.285 00:25:19.285 Latency(us) 00:25:19.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.285 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:19.285 nvme0n1 : 2.00 26065.28 101.82 0.00 0.00 4905.90 2436.23 13620.09 00:25:19.285 =================================================================================================================== 00:25:19.285 Total : 26065.28 101.82 0.00 0.00 4905.90 2436.23 13620.09 00:25:19.285 0 00:25:19.285 21:18:35 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:19.285 21:18:35 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:19.285 21:18:35 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:19.285 | .driver_specific 00:25:19.285 | .nvme_error 00:25:19.285 | .status_code 00:25:19.285 | .command_transient_transport_error' 00:25:19.285 21:18:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:19.544 21:18:35 -- host/digest.sh@71 -- # (( 204 > 0 )) 00:25:19.544 21:18:35 -- host/digest.sh@73 -- # killprocess 3190525 00:25:19.544 21:18:35 -- common/autotest_common.sh@936 -- # '[' -z 3190525 ']' 00:25:19.544 21:18:35 -- common/autotest_common.sh@940 -- # kill -0 3190525 00:25:19.544 21:18:35 -- 
common/autotest_common.sh@941 -- # uname 00:25:19.544 21:18:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.544 21:18:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3190525 00:25:19.544 21:18:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:19.544 21:18:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:19.544 21:18:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3190525' 00:25:19.544 killing process with pid 3190525 00:25:19.544 21:18:35 -- common/autotest_common.sh@955 -- # kill 3190525 00:25:19.544 Received shutdown signal, test time was about 2.000000 seconds 00:25:19.544 00:25:19.544 Latency(us) 00:25:19.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.544 =================================================================================================================== 00:25:19.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.544 21:18:35 -- common/autotest_common.sh@960 -- # wait 3190525 00:25:19.804 21:18:35 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:19.804 21:18:35 -- host/digest.sh@54 -- # local rw bs qd 00:25:19.804 21:18:35 -- host/digest.sh@56 -- # rw=randread 00:25:19.804 21:18:35 -- host/digest.sh@56 -- # bs=131072 00:25:19.804 21:18:35 -- host/digest.sh@56 -- # qd=16 00:25:19.804 21:18:35 -- host/digest.sh@58 -- # bperfpid=3191221 00:25:19.804 21:18:35 -- host/digest.sh@60 -- # waitforlisten 3191221 /var/tmp/bperf.sock 00:25:19.804 21:18:35 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:19.804 21:18:35 -- common/autotest_common.sh@817 -- # '[' -z 3191221 ']' 00:25:19.804 21:18:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:19.804 21:18:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:19.804 21:18:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:19.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:19.804 21:18:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:19.804 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:25:19.804 [2024-04-18 21:18:35.660469] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:19.804 [2024-04-18 21:18:35.660525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191221 ] 00:25:19.804 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:19.804 Zero copy mechanism will not be used. 
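The host/digest.sh@71 trace a little further up (21:18:35) shows how the harness derives the transient-error count for the run that just finished: it calls bdev_get_iostat on the bperf RPC socket, filters the per-bdev NVMe error statistics with jq, and asserts the count is non-zero (204 here). A minimal sketch of that check, reusing the rpc.py path, socket, bdev name and jq filter from the trace (the helper name get_transient_errcount_sketch is ours, not part of digest.sh):

    get_transient_errcount_sketch() {
        # Query bdevperf's iostat over its RPC socket and pull out the
        # command_transient_transport_error counter (jq filter copied from the trace).
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount_sketch nvme0n1)
    (( errcount > 0 ))   # the run above reported 204 such errors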
00:25:19.804 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.804 [2024-04-18 21:18:35.719981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.063 [2024-04-18 21:18:35.788863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.632 21:18:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:20.632 21:18:36 -- common/autotest_common.sh@850 -- # return 0 00:25:20.632 21:18:36 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:20.632 21:18:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:20.892 21:18:36 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:20.892 21:18:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.892 21:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:20.892 21:18:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.892 21:18:36 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:20.892 21:18:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.152 nvme0n1 00:25:21.152 21:18:37 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:21.152 21:18:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.152 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.152 21:18:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.152 21:18:37 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:21.152 21:18:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.411 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.411 Zero copy mechanism will not be used. 00:25:21.411 Running I/O for 2 seconds... 
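The 21:18:36-21:18:37 trace just above is the setup for the next error-injection pass (randread, 128 KiB I/O, queue depth 16). Condensed into a shell sketch: the bdevperf and bperf_rpc arguments are copied from the trace, while the accel_error_inject_error calls go through rpc_cmd to the target application, whose RPC socket is not shown in the log, so rpc.py's default socket is assumed here.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf on its own RPC socket: randread, 131072-byte I/O, qd 16, 2 s, wait for RPCs (-z).
    $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 131072 -t 2 -q 16 -z &

    # Keep per-command NVMe error statistics and retry failed commands indefinitely (-1).
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side: clear any previous crc32c error injection (default rpc.py socket assumed).
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # Attach the TCP controller with data digest enabled (--ddgst), exposing nvme0n1.
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side: corrupt crc32c results so the host sees data digest errors (arguments as traced).
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the I/O phase ("Running I/O for 2 seconds..." in the log).
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests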
00:25:21.411 [2024-04-18 21:18:37.138977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.411 [2024-04-18 21:18:37.139013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.411 [2024-04-18 21:18:37.139023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.411 [2024-04-18 21:18:37.153192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.411 [2024-04-18 21:18:37.153218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.411 [2024-04-18 21:18:37.153227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.411 [2024-04-18 21:18:37.168883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.411 [2024-04-18 21:18:37.168905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.411 [2024-04-18 21:18:37.168913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.411 [2024-04-18 21:18:37.181101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.411 [2024-04-18 21:18:37.181121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.411 [2024-04-18 21:18:37.181130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.411 [2024-04-18 21:18:37.190486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.190505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.190518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.199047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.199066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.199074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.207642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.207665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.207673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.216360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.216381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.216389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.224943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.224964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.224972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.233474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.233494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.233503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.242016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.242036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.242044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.250564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.250583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.250591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.259112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.259133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.259141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.267696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.267717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.267725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.276456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.276475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.276483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.285301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.285321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.285328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.300409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.300429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.300437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.312415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.312435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.312442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.321804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.321823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.321830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.412 [2024-04-18 21:18:37.334899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.412 [2024-04-18 21:18:37.334919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.412 [2024-04-18 21:18:37.334927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.349515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.349552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:21.672 [2024-04-18 21:18:37.349560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.360394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.360413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.360421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.369996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.370016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.370023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.378925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.378944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.378955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.388056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.388076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.388084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.397662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.397681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.397689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.406960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.406981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.406990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.424166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.424186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.424194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.437642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.437662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.437670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.452342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.452362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.452369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.463763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.463783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.463790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.473045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.473065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.473073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.486646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.486666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.486674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.500938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.500959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.500966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.516798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.516819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.516827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.530189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.530209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.530217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.540874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.540894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.540901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.550242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.550261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.550269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.560102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.560121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.560129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.569157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.569176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.569184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.584956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.672 [2024-04-18 21:18:37.584975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.584986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.672 [2024-04-18 21:18:37.598223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 
00:25:21.672 [2024-04-18 21:18:37.598244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.672 [2024-04-18 21:18:37.598252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.608859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.608879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.608887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.624071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.624090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.624098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.636273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.636294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.636302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.645948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.645968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.645976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.655188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.655207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.655214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.663700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.663721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.663729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.672420] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.672440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.672448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.680774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.680797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.680805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.697009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.697029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.697036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.709727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.709749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.709757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.719976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.719996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.720004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.729141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.729162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.729170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.738450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.738471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.738479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.747804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.932 [2024-04-18 21:18:37.747824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.932 [2024-04-18 21:18:37.747833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.932 [2024-04-18 21:18:37.756606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.756628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.756637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.765294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.765315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.765323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.773901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.773922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.773930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.782554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.782575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.782583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.791187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.791207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.791215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.799868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.799887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.799895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.808449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.808468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.808476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.817141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.817161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.817169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.825798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.825818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.825826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.834355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.834374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.834382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.842887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.842907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.842917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.851842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.851863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.851871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.933 [2024-04-18 21:18:37.860735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:21.933 [2024-04-18 21:18:37.860755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.933 [2024-04-18 21:18:37.860763] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.869617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.869636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.869644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.878504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.878530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.878538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.887142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.887161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.887169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.895810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.895830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.895838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.904388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.904408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.904416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.912966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.912986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.912994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.921517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.921538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.921546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.930290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.930310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.930317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.938873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.938893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.938900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.947490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.947515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.947523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.956030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.956050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.956058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.964645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.964665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.964672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.973181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.973202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.973210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.981812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.981832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.981840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.990472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.990491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.990502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:37.999132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:37.999152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:37.999160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.007920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.007940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.007947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.016595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.016614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.016622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.025143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.025162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.025170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.033752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.033771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.033778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.042416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.042435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.042443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.051233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.051253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.051261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.059915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.059935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.059942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.068590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.068613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.068621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.077202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.077222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.077229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.085860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.194 [2024-04-18 21:18:38.085881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.194 [2024-04-18 21:18:38.085889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.194 [2024-04-18 21:18:38.094520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.195 [2024-04-18 21:18:38.094540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.195 [2024-04-18 21:18:38.094548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.195 [2024-04-18 21:18:38.103091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 
00:25:22.195 [2024-04-18 21:18:38.103110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.195 [2024-04-18 21:18:38.103118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.195 [2024-04-18 21:18:38.111822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.195 [2024-04-18 21:18:38.111842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.195 [2024-04-18 21:18:38.111850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.195 [2024-04-18 21:18:38.120470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.195 [2024-04-18 21:18:38.120489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.195 [2024-04-18 21:18:38.120497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.455 [2024-04-18 21:18:38.129090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.455 [2024-04-18 21:18:38.129110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.455 [2024-04-18 21:18:38.129118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.455 [2024-04-18 21:18:38.137659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.455 [2024-04-18 21:18:38.137679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.455 [2024-04-18 21:18:38.137686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.455 [2024-04-18 21:18:38.146194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.455 [2024-04-18 21:18:38.146214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.455 [2024-04-18 21:18:38.146222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.455 [2024-04-18 21:18:38.154901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.455 [2024-04-18 21:18:38.154921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.455 [2024-04-18 21:18:38.154928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.163445] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.163465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.163473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.172016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.172037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.172045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.180616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.180636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.180643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.189240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.189260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.189268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.197865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.197885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.197893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.206504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.206530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.206538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.215139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.215158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.215168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.223817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.223837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.223844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.232964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.232983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.232990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.242007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.242028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.242036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.252785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.252808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.252816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.262885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.262906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.262914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.273157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.273178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.273187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.283189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.283212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.283220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.294175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.294197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.294205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.303590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.303612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.303621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.313826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.313848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.313856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.323015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.323035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.323044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.331946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.331966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.331974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.340477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.340498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.340506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.349143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.349163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.349171] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.357812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.357831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.357839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.366458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.366477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.366484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.375126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.375146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.375157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.456 [2024-04-18 21:18:38.383776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.456 [2024-04-18 21:18:38.383796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.456 [2024-04-18 21:18:38.383804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.392407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.392426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.392433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.401086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.401104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.401112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.409668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.409687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.409695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.418310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.418329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.418336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.426946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.426966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.426975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.435578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.435598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.435605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.444364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.444384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.444392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.453176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.453199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.453206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.461804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.461823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.461830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.470403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.470423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.470430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.478995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.479014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.479022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.487618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.487637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.487645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.496270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.496289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.496297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.504948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.504967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.504975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.513666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.513685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.513692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.522230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.522248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.522256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.530869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.530889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.530896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.539395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.539414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.539422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.548017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.548036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.548044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.556749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.556768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.556776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.565369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.565388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.565395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.717 [2024-04-18 21:18:38.574441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.717 [2024-04-18 21:18:38.574461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.717 [2024-04-18 21:18:38.574469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.718 [2024-04-18 21:18:38.583129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.718 [2024-04-18 21:18:38.583148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.718 [2024-04-18 21:18:38.583155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.718 [2024-04-18 21:18:38.591723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 
00:25:22.718 [2024-04-18 21:18:38.591742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.718 [2024-04-18 21:18:38.591750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.718 [2024-04-18 21:18:38.600450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.718 [2024-04-18 21:18:38.600469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.718 [2024-04-18 21:18:38.600480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.718 [2024-04-18 21:18:38.609085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.718 [2024-04-18 21:18:38.609104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.718 [2024-04-18 21:18:38.609112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.718 [2024-04-18 21:18:38.617698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.718 [2024-04-18 21:18:38.617728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.718 [2024-04-18 21:18:38.617736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.718 [2024-04-18 21:18:38.626254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.718 [2024-04-18 21:18:38.626274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.718 [2024-04-18 21:18:38.626281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.718 [2024-04-18 21:18:38.634838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.718 [2024-04-18 21:18:38.634857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.718 [2024-04-18 21:18:38.634865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.718 [2024-04-18 21:18:38.643479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.718 [2024-04-18 21:18:38.643499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.718 [2024-04-18 21:18:38.643507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.652137] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.652157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.652164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.660811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.660830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.660837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.669387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.669406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.669413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.677971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.677991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.677999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.686524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.686543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.686551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.695095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.695114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.695121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.703682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.703701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.703709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.712290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.712309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.712317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.720904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.720923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.720930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.729518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.729537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.729545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.738122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.738140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.738148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.746730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.746750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.746761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.755348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.755368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.755375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.763839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.978 [2024-04-18 21:18:38.763858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.978 [2024-04-18 21:18:38.763866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.978 [2024-04-18 21:18:38.772393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.772412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.772420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.781059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.781078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.781086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.789651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.789670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.789678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.798284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.798303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.798311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.806869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.806888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.806895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.815490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.815514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.815523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.824130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.824152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.824159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.832709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.832728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.832736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.841327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.841346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.841353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.850053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.850071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.850079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.858601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.858620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.858627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.871779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.871798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.871805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.886334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.886353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.886360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.896932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.896950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.896958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.979 [2024-04-18 21:18:38.906571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:22.979 [2024-04-18 21:18:38.906591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.979 [2024-04-18 21:18:38.906598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.915615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.915634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.915641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.924669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.924688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.924695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.933460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.933481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.933488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.941863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.941883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.941891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.951628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.951647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.951655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.961486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.961506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.961519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.971487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.971507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.971520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.980234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.980254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.980262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:38.994861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:38.994884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:38.994892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.009710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.009730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.009738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.022461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.022481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.022489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.040029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.040050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.040058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.053248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.053267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.053275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.064500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.064530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.064537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.073847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.073867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.073874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.082960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.082980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.082988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.092730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.092750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.092757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.101321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.101340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.101348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.240 [2024-04-18 21:18:39.113550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d22fc0) 00:25:23.240 [2024-04-18 21:18:39.113570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.240 [2024-04-18 21:18:39.113578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.240 00:25:23.240 Latency(us) 00:25:23.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.240 Job: nvme0n1 
(Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:23.240 nvme0n1 : 2.00 3193.83 399.23 0.00 0.00 5007.84 4103.12 18122.13 00:25:23.240 =================================================================================================================== 00:25:23.240 Total : 3193.83 399.23 0.00 0.00 5007.84 4103.12 18122.13 00:25:23.240 0 00:25:23.240 21:18:39 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:23.240 21:18:39 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:23.240 21:18:39 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:23.240 | .driver_specific 00:25:23.240 | .nvme_error 00:25:23.240 | .status_code 00:25:23.241 | .command_transient_transport_error' 00:25:23.241 21:18:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:23.500 21:18:39 -- host/digest.sh@71 -- # (( 206 > 0 )) 00:25:23.500 21:18:39 -- host/digest.sh@73 -- # killprocess 3191221 00:25:23.500 21:18:39 -- common/autotest_common.sh@936 -- # '[' -z 3191221 ']' 00:25:23.500 21:18:39 -- common/autotest_common.sh@940 -- # kill -0 3191221 00:25:23.500 21:18:39 -- common/autotest_common.sh@941 -- # uname 00:25:23.500 21:18:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:23.500 21:18:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3191221 00:25:23.500 21:18:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:23.500 21:18:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:23.500 21:18:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3191221' 00:25:23.500 killing process with pid 3191221 00:25:23.500 21:18:39 -- common/autotest_common.sh@955 -- # kill 3191221 00:25:23.500 Received shutdown signal, test time was about 2.000000 seconds 00:25:23.500 00:25:23.500 Latency(us) 00:25:23.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.500 =================================================================================================================== 00:25:23.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.500 21:18:39 -- common/autotest_common.sh@960 -- # wait 3191221 00:25:23.759 21:18:39 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:23.759 21:18:39 -- host/digest.sh@54 -- # local rw bs qd 00:25:23.759 21:18:39 -- host/digest.sh@56 -- # rw=randwrite 00:25:23.759 21:18:39 -- host/digest.sh@56 -- # bs=4096 00:25:23.759 21:18:39 -- host/digest.sh@56 -- # qd=128 00:25:23.759 21:18:39 -- host/digest.sh@58 -- # bperfpid=3191917 00:25:23.759 21:18:39 -- host/digest.sh@60 -- # waitforlisten 3191917 /var/tmp/bperf.sock 00:25:23.759 21:18:39 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:23.759 21:18:39 -- common/autotest_common.sh@817 -- # '[' -z 3191917 ']' 00:25:23.759 21:18:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:23.759 21:18:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:23.760 21:18:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:23.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
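For reference, a minimal standalone sketch of the get_transient_errcount step traced just above: it queries bdevperf's I/O statistics over the RPC socket and pulls out the command_transient_transport_error counter with jq. The rpc.py path, socket path, bdev name, and jq filter are taken verbatim from the trace; everything else (variable names, the standalone-script framing) is illustrative and assumes a bdevperf instance is already listening on that socket.

#!/usr/bin/env bash
# Sketch of host/digest.sh's transient-error count check (names are illustrative).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # rpc.py location as seen in the log
BPERF_SOCK=/var/tmp/bperf.sock                               # bdevperf RPC socket as seen in the log

# Ask bdevperf for per-bdev I/O stats and extract the NVMe transient transport error count.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The traced run requires the count to be non-zero; above it evaluates (( 206 > 0 )).
(( errcount > 0 )) && echo "transient transport errors seen: $errcount"

After this check the script kills the bperf process for the completed randread run and starts the next one, as the surrounding trace shows.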
00:25:23.760 21:18:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:23.760 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:25:23.760 [2024-04-18 21:18:39.633004] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:23.760 [2024-04-18 21:18:39.633051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191917 ] 00:25:23.760 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.019 [2024-04-18 21:18:39.692114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.019 [2024-04-18 21:18:39.762264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.594 21:18:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:24.594 21:18:40 -- common/autotest_common.sh@850 -- # return 0 00:25:24.594 21:18:40 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:24.594 21:18:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:24.853 21:18:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:24.853 21:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.853 21:18:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.853 21:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.853 21:18:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:24.853 21:18:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:25.113 nvme0n1 00:25:25.113 21:18:40 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:25.113 21:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.113 21:18:40 -- common/autotest_common.sh@10 -- # set +x 00:25:25.113 21:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.113 21:18:40 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:25.113 21:18:40 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:25.113 Running I/O for 2 seconds... 
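The trace above sets up the randwrite data-digest error run: error statistics and unlimited retries are enabled on the bdev layer, CRC32C error injection is disabled while the controller is attached with data digest enabled (--ddgst), injection is then switched to corrupt the next 256 CRC32C operations, and bdevperf is told to run the queued job. Below is a hedged, condensed sketch of that sequence; the RPC commands and their flags are copied from the trace, while the target-side socket for the accel_error_inject_error calls is an assumption (the trace issues them through rpc_cmd against the nvmf target app, whose socket path is not shown in this excerpt).

#!/usr/bin/env bash
# Condensed sketch of the randwrite digest-error setup traced above (not the script itself).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock                         # bdevperf RPC socket from the log
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s $BPERF_SOCK"
TARGET_RPC="$SPDK_DIR/scripts/rpc.py"                  # assumption: default target socket

# Keep per-error-code NVMe statistics and retry failed commands indefinitely.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach cleanly first: no CRC32C corruption on the target while connecting,
# and enable data digest on the initiator side with --ddgst.
$TARGET_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Now corrupt the next 256 CRC32C operations so every completed write hits a digest error.
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the queued bdevperf job (randwrite, 4096 B, qd 128, 2 s as configured above).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The digest errors reported in the I/O log that follows are the expected result of this injection; the run is judged afterwards by the same transient-error count check shown earlier.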
00:25:25.113 [2024-04-18 21:18:40.950687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:40.951365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:40.951392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:40.960761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:40.961032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:40.961053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:40.970673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:40.970882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:40.970900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:40.980458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:40.980698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:40.980716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:40.990276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:40.990540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:40.990559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:41.000137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:41.000375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:41.000393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:41.009963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:41.010204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:41.010223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:41.019887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:41.020168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:41.020186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:41.029634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:41.029919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:41.029936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.113 [2024-04-18 21:18:41.039436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.113 [2024-04-18 21:18:41.039687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.113 [2024-04-18 21:18:41.039705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.049466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.049744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.049764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.059303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.059585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.059606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.069103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.069331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.069349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.079089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.079374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.079392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.088926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.089204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.089221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.098665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.098955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.098972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.108468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.108785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.108803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.118252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.118477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.118495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.128004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.128308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.128326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.137850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.138135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.138153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.147587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.147903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.147921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.157414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.157678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.157696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.167143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.167379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.167396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.176954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.177178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.177195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.186665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.186884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.186902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.196469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.196776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.196794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.206279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.206583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.206601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.216138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.216361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.216380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.225968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.226219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.226237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.235685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.235924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.235942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.245477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.245746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.245774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.255298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.255582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.255601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.265140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.265392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.265410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.274949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.275227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.275245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.284729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.285028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 
21:18:41.285045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.373 [2024-04-18 21:18:41.294535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.373 [2024-04-18 21:18:41.294775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-04-18 21:18:41.294792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.304501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.304791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.304808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.314436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.314712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.314733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.324206] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.324449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.324466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.333982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.334224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.334242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.343728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.344022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.344040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.353609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.353869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:25.633 [2024-04-18 21:18:41.353886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.363358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.363646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.363663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.373148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.373436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.373454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.382987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.383223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.383240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.392720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.392976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.392994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.402536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.402762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.402780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.412281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.412493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.412515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.422130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.422339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11016 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.422356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.431896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.432104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.432121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.441675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.441904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.441922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.451527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.451810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.451827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.461312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.461627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.461645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.471214] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.471462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.471481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.633 [2024-04-18 21:18:41.480949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.633 [2024-04-18 21:18:41.481257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.633 [2024-04-18 21:18:41.481276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.634 [2024-04-18 21:18:41.490713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.634 [2024-04-18 21:18:41.490939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 
nsid:1 lba:9839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.634 [2024-04-18 21:18:41.490956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.634 [2024-04-18 21:18:41.500448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.634 [2024-04-18 21:18:41.500745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.634 [2024-04-18 21:18:41.500763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.634 [2024-04-18 21:18:41.510258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.634 [2024-04-18 21:18:41.510520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.634 [2024-04-18 21:18:41.510539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.634 [2024-04-18 21:18:41.519990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.634 [2024-04-18 21:18:41.520284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.634 [2024-04-18 21:18:41.520301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.634 [2024-04-18 21:18:41.530116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.634 [2024-04-18 21:18:41.530376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.634 [2024-04-18 21:18:41.530394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.634 [2024-04-18 21:18:41.540098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.634 [2024-04-18 21:18:41.540338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.634 [2024-04-18 21:18:41.540355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.634 [2024-04-18 21:18:41.550117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.634 [2024-04-18 21:18:41.550350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.634 [2024-04-18 21:18:41.550368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.634 [2024-04-18 21:18:41.560305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.634 [2024-04-18 21:18:41.560545] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.634 [2024-04-18 21:18:41.560563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.894 [2024-04-18 21:18:41.570298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.894 [2024-04-18 21:18:41.570560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.894 [2024-04-18 21:18:41.570584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.894 [2024-04-18 21:18:41.580087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.894 [2024-04-18 21:18:41.580323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.894 [2024-04-18 21:18:41.580341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.894 [2024-04-18 21:18:41.589976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.894 [2024-04-18 21:18:41.590211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.590229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.600006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.600294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.600312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.610004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.610293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.610310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.619841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.620062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.620079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.629592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.629833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.629850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.639407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.639682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.639700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.649184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.649484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.649501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.658965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.659228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.659245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.668757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.668995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.669013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.678586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.678872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.678890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.688349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.688629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.688647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.698128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 
21:18:41.698402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.698419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.707964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.708214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.708232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.717702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.718016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.718033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.727690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.727904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.727923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.737401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.737703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.737722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.747160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.747378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.747397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.756954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.757231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.757249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.766733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with 
pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.766955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.766973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.776474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.776756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.776776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.786280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.786570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.786587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.796090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.796331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.796349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.805876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.806175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.806193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.895 [2024-04-18 21:18:41.815665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:25.895 [2024-04-18 21:18:41.815941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.895 [2024-04-18 21:18:41.815958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.161 [2024-04-18 21:18:41.825829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.161 [2024-04-18 21:18:41.826094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.161 [2024-04-18 21:18:41.826117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.161 [2024-04-18 21:18:41.835848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.161 [2024-04-18 21:18:41.836084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.836102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.845882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.846108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.846126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.855829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.856106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.856124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.865771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.866053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.866071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.875561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.875805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.875824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.885275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.885500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.885522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.895069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.895352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.895369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.904852] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.905120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.905138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.914640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.914900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.914918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.924490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.924795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.924813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.934236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.934518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.934551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.943940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.944235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.944253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.953747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.954008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.954025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.963570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.963826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.963843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.973341] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.973570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.973587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.983189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.983440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.983461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:41.993018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:41.993253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:41.993271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.002792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.003047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.003065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.012575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.012829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.012846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.022348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.022582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.022600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.032124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.032377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.032395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 
[2024-04-18 21:18:42.042013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.042250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.042268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.051778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.052035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.052057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.061568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.061804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.061823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.071546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.071793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.071811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.081424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.081665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.081687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.162 [2024-04-18 21:18:42.091354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.162 [2024-04-18 21:18:42.091610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.162 [2024-04-18 21:18:42.091629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.422 [2024-04-18 21:18:42.101391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.422 [2024-04-18 21:18:42.101764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.422 [2024-04-18 21:18:42.101781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:25:26.422 [2024-04-18 21:18:42.111297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.422 [2024-04-18 21:18:42.111548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.422 [2024-04-18 21:18:42.111566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.422 [2024-04-18 21:18:42.121045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.422 [2024-04-18 21:18:42.121284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.422 [2024-04-18 21:18:42.121301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.422 [2024-04-18 21:18:42.130810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.422 [2024-04-18 21:18:42.131052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.422 [2024-04-18 21:18:42.131069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.422 [2024-04-18 21:18:42.140581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.422 [2024-04-18 21:18:42.140825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.422 [2024-04-18 21:18:42.140842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.422 [2024-04-18 21:18:42.150357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.422 [2024-04-18 21:18:42.150604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.422 [2024-04-18 21:18:42.150621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.160143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.160387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.160405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.169896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.170143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.170160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.179664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.179907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.179925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.189416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.189660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.189677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.199187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.199424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.199442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.208961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.209204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.209222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.218705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.218955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.218972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.228469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.228724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.228742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.238309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.238556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.238575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.248070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.248317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.248335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.257829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.258070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.258089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.267597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.267836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.267853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.277363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.277609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.277627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.287151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.287396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.287414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.297204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.297450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.297468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.307134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.307382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.307399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.316883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.317131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.317149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.326656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.326897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.326915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.336402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.336653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.336674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.423 [2024-04-18 21:18:42.346171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.423 [2024-04-18 21:18:42.346408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.423 [2024-04-18 21:18:42.346425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.683 [2024-04-18 21:18:42.356220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.683 [2024-04-18 21:18:42.356467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.683 [2024-04-18 21:18:42.356485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.683 [2024-04-18 21:18:42.366064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.683 [2024-04-18 21:18:42.366303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.683 [2024-04-18 21:18:42.366321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.683 [2024-04-18 21:18:42.375816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.683 [2024-04-18 21:18:42.376061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.683 [2024-04-18 
21:18:42.376078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.385556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.385802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.385819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.395346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.395590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.395607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.405043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.405286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.405303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.414812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.415053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.415070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.424572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.424821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.424837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.434321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.434571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.434588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.444081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.444323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:26.684 [2024-04-18 21:18:42.444340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.453846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.454087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.454104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.463601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.463846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.463862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.473342] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.473586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.473603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.483076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.483320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.483337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.492899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.493147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.493166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.502645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.502888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.502906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.512391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.512644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24479 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.512661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.522143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.522388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.522406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.531878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.532129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.532146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.541613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.541856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.541873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.551346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.551590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.551608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.561301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.561559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.561576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.571055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.571297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.571314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.580806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.581051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 
nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.581068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.590562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.590810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.590830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.600282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.600529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.600546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.684 [2024-04-18 21:18:42.610097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.684 [2024-04-18 21:18:42.610344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.684 [2024-04-18 21:18:42.610362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.944 [2024-04-18 21:18:42.620178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.944 [2024-04-18 21:18:42.620421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.620438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.629931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.630183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.630200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.639649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.639888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.639905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.649379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.649629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.649646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.659140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.659386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.659403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.668903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.669146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.669164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.678651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.678895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.678915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.688462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.688716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.688733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.698242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.698489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.698506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.708129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.708375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.708392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.717904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.718147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.718164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.727783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.728023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.728040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.737523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.737771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.737789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.747375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.747617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.747636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.757124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.757371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.757389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.766862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.767109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.767127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.776635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.776884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.776901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.786356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 
21:18:42.786605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.786622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.796119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.796362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.796379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.805856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.806100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.806117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.815626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.815870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.815887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.825349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.825593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.825610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.835111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.835355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.835372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.844841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.845086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.845107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.854598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with 
pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.854842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.854859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.864303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.864546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.864563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.945 [2024-04-18 21:18:42.874242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:26.945 [2024-04-18 21:18:42.874488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.945 [2024-04-18 21:18:42.874505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:27.205 [2024-04-18 21:18:42.884170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:27.205 [2024-04-18 21:18:42.884414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.205 [2024-04-18 21:18:42.884430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:27.205 [2024-04-18 21:18:42.893908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:27.205 [2024-04-18 21:18:42.894160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.205 [2024-04-18 21:18:42.894177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:27.205 [2024-04-18 21:18:42.903654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:27.205 [2024-04-18 21:18:42.903903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.205 [2024-04-18 21:18:42.903920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:27.205 [2024-04-18 21:18:42.913390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720 00:25:27.205 [2024-04-18 21:18:42.913638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.205 [2024-04-18 21:18:42.913655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:27.205 [2024-04-18 21:18:42.923150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xc29bd0) with pdu=0x2000190fe720
00:25:27.205 [2024-04-18 21:18:42.923392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:27.205 [2024-04-18 21:18:42.923411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:27.205 [2024-04-18 21:18:42.932864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc29bd0) with pdu=0x2000190fe720
00:25:27.205 [2024-04-18 21:18:42.933108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:27.205 [2024-04-18 21:18:42.933127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:27.205
00:25:27.205 Latency(us)
00:25:27.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.205 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:27.205 nvme0n1 : 2.00 25838.71 100.93 0.00 0.00 4945.12 4302.58 21085.50
00:25:27.205 ===================================================================================================================
00:25:27.205 Total : 25838.71 100.93 0.00 0.00 4945.12 4302.58 21085.50
00:25:27.205 0
00:25:27.205 21:18:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:27.205 21:18:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:27.205 21:18:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:27.205 | .driver_specific
00:25:27.205 | .nvme_error
00:25:27.205 | .status_code
00:25:27.205 | .command_transient_transport_error'
00:25:27.205 21:18:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:27.205 21:18:43 -- host/digest.sh@71 -- # (( 203 > 0 ))
00:25:27.205 21:18:43 -- host/digest.sh@73 -- # killprocess 3191917
00:25:27.205 21:18:43 -- common/autotest_common.sh@936 -- # '[' -z 3191917 ']'
00:25:27.205 21:18:43 -- common/autotest_common.sh@940 -- # kill -0 3191917
00:25:27.464 21:18:43 -- common/autotest_common.sh@941 -- # uname
00:25:27.464 21:18:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:27.464 21:18:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3191917
00:25:27.464 21:18:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:27.464 21:18:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:27.465 21:18:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3191917'
00:25:27.465 killing process with pid 3191917
00:25:27.465 21:18:43 -- common/autotest_common.sh@955 -- # kill 3191917
00:25:27.465 Received shutdown signal, test time was about 2.000000 seconds
00:25:27.465
00:25:27.465 Latency(us)
00:25:27.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.465 ===================================================================================================================
00:25:27.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:27.465 21:18:43 -- common/autotest_common.sh@960 -- # wait 3191917
00:25:27.465 21:18:43 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:27.465 21:18:43 -- host/digest.sh@54 -- # local rw bs qd
00:25:27.465 21:18:43 -- host/digest.sh@56 -- # rw=randwrite
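The get_transient_errcount check traced a few lines above is only an RPC query against the bdevperf control socket followed by a jq filter. A minimal standalone sketch of the same check, assuming an SPDK checkout under $SPDK_DIR and a bdevperf instance still listening on /var/tmp/bperf.sock (the socket, RPC name, and jq path are taken from the trace; the variable names are illustrative):

    #!/usr/bin/env bash
    # Sketch of the transient-error check shown in the trace above.
    # Assumptions: $SPDK_DIR is an SPDK checkout; bdevperf serves /var/tmp/bperf.sock;
    # the controller was attached with --ddgst and bdev_nvme_set_options --nvme-error-stat,
    # as in the trace, so NVMe error counters are exposed in the iostat output.
    SPDK_DIR=${SPDK_DIR:-./spdk}
    BPERF_SOCK=/var/tmp/bperf.sock

    # bdev_get_iostat reports per-bdev statistics; the counter the digest test reads
    # lives under driver_specific.nvme_error.status_code in that output.
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # Treat a missing counter as zero so the arithmetic test below cannot choke on "null".
    [[ "$errcount" =~ ^[0-9]+$ ]] || errcount=0

    if (( errcount > 0 )); then
        echo "data digest errors were observed: $errcount transient transport errors"
    else
        echo "no transient transport errors recorded"
    fi

In the run above the same query returned 203, which is why the (( 203 > 0 )) test passes and the harness moves on to the next bdevperf configuration.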
21:18:43 -- host/digest.sh@56 -- # bs=131072 00:25:27.465 21:18:43 -- host/digest.sh@56 -- # qd=16 00:25:27.465 21:18:43 -- host/digest.sh@58 -- # bperfpid=3192401 00:25:27.465 21:18:43 -- host/digest.sh@60 -- # waitforlisten 3192401 /var/tmp/bperf.sock 00:25:27.465 21:18:43 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:27.465 21:18:43 -- common/autotest_common.sh@817 -- # '[' -z 3192401 ']' 00:25:27.465 21:18:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:27.465 21:18:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:27.465 21:18:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:27.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:27.465 21:18:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:27.465 21:18:43 -- common/autotest_common.sh@10 -- # set +x 00:25:27.724 [2024-04-18 21:18:43.431786] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:27.724 [2024-04-18 21:18:43.431834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192401 ] 00:25:27.724 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:27.724 Zero copy mechanism will not be used. 00:25:27.724 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.724 [2024-04-18 21:18:43.492100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.724 [2024-04-18 21:18:43.558486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.663 21:18:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:28.663 21:18:44 -- common/autotest_common.sh@850 -- # return 0 00:25:28.663 21:18:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:28.663 21:18:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:28.663 21:18:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:28.663 21:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.663 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:25:28.663 21:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.663 21:18:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.663 21:18:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.922 nvme0n1 00:25:28.922 21:18:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:28.922 21:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.922 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:25:28.922 21:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.922 21:18:44 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:28.922 21:18:44 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:28.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:28.922 Zero copy mechanism will not be used. 00:25:28.922 Running I/O for 2 seconds... 00:25:28.922 [2024-04-18 21:18:44.797655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:28.922 [2024-04-18 21:18:44.798253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.922 [2024-04-18 21:18:44.798283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.922 [2024-04-18 21:18:44.811238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:28.922 [2024-04-18 21:18:44.811657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.922 [2024-04-18 21:18:44.811682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.922 [2024-04-18 21:18:44.820587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:28.922 [2024-04-18 21:18:44.821013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.922 [2024-04-18 21:18:44.821034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.922 [2024-04-18 21:18:44.829448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:28.922 [2024-04-18 21:18:44.829868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.922 [2024-04-18 21:18:44.829889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.922 [2024-04-18 21:18:44.839265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:28.922 [2024-04-18 21:18:44.839412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.922 [2024-04-18 21:18:44.839432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.922 [2024-04-18 21:18:44.848885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:28.922 [2024-04-18 21:18:44.849316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.922 [2024-04-18 21:18:44.849336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.866869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 
21:18:44.867578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.867598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.886737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.887295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.887315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.896563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.897077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.897095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.906735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.907251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.907270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.916008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.916447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.916465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.926099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.926588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.926606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.936352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.936838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.936860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.947459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with 
pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.947951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.947969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.957874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.958433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.958451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.967328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.967815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.967833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.977614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.978065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.978083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.986966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.987331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.987349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:44.996336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:44.996794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:44.996812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:45.006122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:45.006624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.182 [2024-04-18 21:18:45.006642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.182 [2024-04-18 21:18:45.016195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.182 [2024-04-18 21:18:45.016623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.016641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.026070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.026467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.026485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.034723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.035153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.035171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.045254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.045754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.045773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.054925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.055273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.055291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.065062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.065436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.065456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.075132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.075558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.075578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.085327] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.085772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.085790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.095207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.095609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.095628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.183 [2024-04-18 21:18:45.105019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.183 [2024-04-18 21:18:45.105458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.183 [2024-04-18 21:18:45.105477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.114735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.115197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.115215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.124168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.124575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.124593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.134167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.134620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.134639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.144982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.145496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.145521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:25:29.443 [2024-04-18 21:18:45.155915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.156305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.156323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.166665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.167069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.167086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.176245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.176661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.176679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.186969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.187487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.187505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.197678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.198216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.198237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.208094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.208595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.208613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.219058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.219585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.219602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.229626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.229829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.229847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.240595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.241105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.241123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.250745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.251258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.251276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.261448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.261815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.261832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.271349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.271736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.271754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.281628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.443 [2024-04-18 21:18:45.282024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.443 [2024-04-18 21:18:45.282042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.443 [2024-04-18 21:18:45.292742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.444 [2024-04-18 21:18:45.293243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.444 [2024-04-18 21:18:45.293261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.444 [2024-04-18 21:18:45.311251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.444 [2024-04-18 21:18:45.311731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.444 [2024-04-18 21:18:45.311749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.444 [2024-04-18 21:18:45.323903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.444 [2024-04-18 21:18:45.324325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.444 [2024-04-18 21:18:45.324345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.444 [2024-04-18 21:18:45.335256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.444 [2024-04-18 21:18:45.335645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.444 [2024-04-18 21:18:45.335664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.444 [2024-04-18 21:18:45.346727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.444 [2024-04-18 21:18:45.347068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.444 [2024-04-18 21:18:45.347086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.444 [2024-04-18 21:18:45.356734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.444 [2024-04-18 21:18:45.357124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.444 [2024-04-18 21:18:45.357142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.444 [2024-04-18 21:18:45.367224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.444 [2024-04-18 21:18:45.367726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.444 [2024-04-18 21:18:45.367744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.703 [2024-04-18 21:18:45.378327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.703 [2024-04-18 21:18:45.378850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.703 [2024-04-18 21:18:45.378867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.703 [2024-04-18 21:18:45.389187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.703 [2024-04-18 21:18:45.389547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.703 [2024-04-18 21:18:45.389565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.399158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.399721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.410872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.411262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.411281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.421614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.422085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.422105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.432393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.432810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.432828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.444373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.444723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.444741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.455719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.456122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 
[2024-04-18 21:18:45.456139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.466719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.467226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.467244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.477217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.477653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.477671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.487754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.488191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.488211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.497018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.497305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.497323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.507528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.507913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.507931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.518487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.519038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.519057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.529172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.529530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.529548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.539648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.540055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.540072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.549384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.549806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.549824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.560996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.561311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.561329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.572217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.572526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.572546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.582584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.582945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.582964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.593216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.593610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.593628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.603422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.603916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.603934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.614400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.614741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.614758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.704 [2024-04-18 21:18:45.624825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.704 [2024-04-18 21:18:45.625226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.704 [2024-04-18 21:18:45.625244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.635302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.635728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.635746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.645847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.646135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.646153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.655953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.656374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.656392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.666456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.666843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.666861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.677515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.677939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.677958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.688753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.689300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.689318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.698877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.699270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.699288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.709260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.709666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.709683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.720204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.720715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.720732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.731943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.732340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.732359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.742919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.743803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.743820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.753624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.753959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.753977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.764224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.764587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.764609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.772944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.773351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.773369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.782739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.783187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.783205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.793924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.794322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.794339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.803291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.803603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.803624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.964 [2024-04-18 21:18:45.812705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.964 [2024-04-18 21:18:45.813148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.964 [2024-04-18 21:18:45.813167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.965 [2024-04-18 21:18:45.822067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.965 
[2024-04-18 21:18:45.822625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.965 [2024-04-18 21:18:45.822644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.965 [2024-04-18 21:18:45.832366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.965 [2024-04-18 21:18:45.832726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.965 [2024-04-18 21:18:45.832745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.965 [2024-04-18 21:18:45.842370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.965 [2024-04-18 21:18:45.842835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.965 [2024-04-18 21:18:45.842853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.965 [2024-04-18 21:18:45.852238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.965 [2024-04-18 21:18:45.852728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.965 [2024-04-18 21:18:45.852746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.965 [2024-04-18 21:18:45.862783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.965 [2024-04-18 21:18:45.863082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.965 [2024-04-18 21:18:45.863100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.965 [2024-04-18 21:18:45.872355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.965 [2024-04-18 21:18:45.872664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.965 [2024-04-18 21:18:45.872681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.965 [2024-04-18 21:18:45.882747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:29.965 [2024-04-18 21:18:45.883097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.965 [2024-04-18 21:18:45.883115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.895094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with 
pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.895460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.895478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.906075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.906505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.906526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.916829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.917292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.917310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.926176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.926536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.926553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.936531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.936918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.936935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.946306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.946798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.946816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.956721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.957103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.957121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.965714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.966225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.966243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.975699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.976123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.976142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.985340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.985761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.985779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:45.994220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:45.994645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:45.994664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.004296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.004686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.004704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.013154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.013457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.013475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.022456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.022847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.022868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.030963] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.031310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.031327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.041726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.042223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.042242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.051485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.051880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.051898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.061940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.062370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.062388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.072321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.072727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.072745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.082749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.083135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.083155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.225 [2024-04-18 21:18:46.092728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.225 [2024-04-18 21:18:46.093068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-04-18 21:18:46.093086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:30.225 [2024-04-18 21:18:46.103157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.226 [2024-04-18 21:18:46.103430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.226 [2024-04-18 21:18:46.103448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.226 [2024-04-18 21:18:46.113966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.226 [2024-04-18 21:18:46.114308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.226 [2024-04-18 21:18:46.114326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.226 [2024-04-18 21:18:46.124012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.226 [2024-04-18 21:18:46.124312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.226 [2024-04-18 21:18:46.124330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.226 [2024-04-18 21:18:46.133031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.226 [2024-04-18 21:18:46.133361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.226 [2024-04-18 21:18:46.133379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.226 [2024-04-18 21:18:46.142304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.226 [2024-04-18 21:18:46.142688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.226 [2024-04-18 21:18:46.142707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.226 [2024-04-18 21:18:46.153555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.226 [2024-04-18 21:18:46.153896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.226 [2024-04-18 21:18:46.153914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.164283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.164629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.164646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.175381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.175691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.175709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.185808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.186184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.186201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.197076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.197450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.197468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.206539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.206906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.206924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.217223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.217632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.217650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.226720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.227085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.227103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.236785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.237160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.237177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.247656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.248042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.248060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.258369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.258760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.258778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.268630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.269073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.269090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.279467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.279849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.279868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.289641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.290004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.290025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.300534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.301002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.301020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.310827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.311153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.311170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.320333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.320668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.320687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.330960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.331365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.331384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.341588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.341910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.341929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.351272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.351636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.351656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.361862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.362142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.362161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.372754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.486 [2024-04-18 21:18:46.373095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.486 [2024-04-18 21:18:46.373113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.486 [2024-04-18 21:18:46.383182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.487 [2024-04-18 21:18:46.383521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.487 [2024-04-18 
21:18:46.383539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.487 [2024-04-18 21:18:46.392929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.487 [2024-04-18 21:18:46.393301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.487 [2024-04-18 21:18:46.393319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.487 [2024-04-18 21:18:46.404064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.487 [2024-04-18 21:18:46.404466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.487 [2024-04-18 21:18:46.404485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.487 [2024-04-18 21:18:46.413879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.487 [2024-04-18 21:18:46.414252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.487 [2024-04-18 21:18:46.414271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.425035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.425408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.425426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.436795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.437190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.437208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.446868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.447343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.447360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.458382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.458765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.458783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.467102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.467524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.467545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.477177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.477654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.477672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.487685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.488012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.488029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.498016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.498354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.498373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.508458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.508997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.509014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.519752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.520087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.520105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.529973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.530277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.530295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.540086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.540433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.540451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.549940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.550299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.550316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.559204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.559506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.559531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.568885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.569285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.569303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.579664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.580020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.580037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.590308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.590742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.590760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.601519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.601876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.601895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.612430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.612759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.612778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.622051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.622367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.622386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.631804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.632116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.632134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.641064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.641414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.641433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.649811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.650226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.650245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.658722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.659152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.659171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.746 [2024-04-18 21:18:46.667908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:30.746 [2024-04-18 21:18:46.668236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.746 [2024-04-18 21:18:46.668255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.677870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.678236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.678256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.685824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.686209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.686228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.695840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.696112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.696131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.704403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.704732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.704750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.714193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.714579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.714597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.724341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.724696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.724718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.734064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 
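Each record in the dump above follows the same three-step pattern: tcp.c flags a data digest error on the qpair, nvme_qpair.c prints the WRITE command that was in flight, and the completion is reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a rough offline cross-check of how many I/Os were affected, records like these can be tallied from a saved copy of the console output; the sketch below is illustrative only, and bdevperf-digest.log is an assumed capture file, not something the test itself writes.

  # Hedged sketch: tally the per-I/O error records shown above from a saved copy of this log.
  # "bdevperf-digest.log" is an assumed capture of the console output, not a test artifact.
  LOG=bdevperf-digest.log
  digest_errs=$(grep -o 'Data digest error on tqpair' "$LOG" | wc -l)
  transient_errs=$(grep -o 'COMMAND TRANSIENT TRANSPORT ERROR' "$LOG" | wc -l)
  echo "data digest errors reported by tcp.c:    $digest_errs"
  echo "transient transport error completions:   $transient_errs"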
[2024-04-18 21:18:46.734361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.734379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.743066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.743322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.743340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.752830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.753243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.753261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.005 [2024-04-18 21:18:46.762394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5edd0) with pdu=0x2000190fef90 00:25:31.005 [2024-04-18 21:18:46.762832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.005 [2024-04-18 21:18:46.762850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.005 00:25:31.005 Latency(us) 00:25:31.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.005 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:31.005 nvme0n1 : 2.00 2952.08 369.01 0.00 0.00 5411.50 3405.02 25644.52 00:25:31.005 =================================================================================================================== 00:25:31.005 Total : 2952.08 369.01 0.00 0.00 5411.50 3405.02 25644.52 00:25:31.005 0 00:25:31.005 21:18:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:31.005 21:18:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:31.005 | .driver_specific 00:25:31.005 | .nvme_error 00:25:31.005 | .status_code 00:25:31.005 | .command_transient_transport_error' 00:25:31.005 21:18:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:31.005 21:18:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:31.263 21:18:46 -- host/digest.sh@71 -- # (( 190 > 0 )) 00:25:31.263 21:18:46 -- host/digest.sh@73 -- # killprocess 3192401 00:25:31.263 21:18:46 -- common/autotest_common.sh@936 -- # '[' -z 3192401 ']' 00:25:31.263 21:18:46 -- common/autotest_common.sh@940 -- # kill -0 3192401 00:25:31.263 21:18:46 -- common/autotest_common.sh@941 -- # uname 00:25:31.263 21:18:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:31.263 21:18:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3192401 00:25:31.263 21:18:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:31.263 
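The assertion above is where host/digest.sh decides the test passed: it queries the bperf application's RPC socket with bdev_get_iostat for nvme0n1, filters the JSON with jq down to the command_transient_transport_error counter, and requires it to be non-zero (here it read 190). A standalone version of that query could look like the sketch below; the rpc.py path, socket and bdev name are taken from the trace, and the JSON path is assumed to match the jq filter the test uses.

  # Hedged sketch of the same counter query host/digest.sh performs above, assuming
  # bdev_get_iostat exposes the counter at the path used by the test's jq filter.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test only passes if at least one transient transport error was counted.
  (( errcount > 0 )) && echo "transient transport errors recorded: $errcount"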
21:18:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:31.263 21:18:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3192401' 00:25:31.263 killing process with pid 3192401 00:25:31.263 21:18:47 -- common/autotest_common.sh@955 -- # kill 3192401 00:25:31.263 Received shutdown signal, test time was about 2.000000 seconds 00:25:31.263 00:25:31.263 Latency(us) 00:25:31.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.264 =================================================================================================================== 00:25:31.264 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.264 21:18:47 -- common/autotest_common.sh@960 -- # wait 3192401 00:25:31.656 21:18:47 -- host/digest.sh@116 -- # killprocess 3190290 00:25:31.656 21:18:47 -- common/autotest_common.sh@936 -- # '[' -z 3190290 ']' 00:25:31.656 21:18:47 -- common/autotest_common.sh@940 -- # kill -0 3190290 00:25:31.656 21:18:47 -- common/autotest_common.sh@941 -- # uname 00:25:31.656 21:18:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:31.656 21:18:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3190290 00:25:31.656 21:18:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:31.656 21:18:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:31.656 21:18:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3190290' 00:25:31.656 killing process with pid 3190290 00:25:31.656 21:18:47 -- common/autotest_common.sh@955 -- # kill 3190290 00:25:31.656 21:18:47 -- common/autotest_common.sh@960 -- # wait 3190290 00:25:31.656 00:25:31.656 real 0m16.844s 00:25:31.656 user 0m32.817s 00:25:31.656 sys 0m3.879s 00:25:31.656 21:18:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:31.656 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.656 ************************************ 00:25:31.656 END TEST nvmf_digest_error 00:25:31.656 ************************************ 00:25:31.656 21:18:47 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:31.656 21:18:47 -- host/digest.sh@150 -- # nvmftestfini 00:25:31.656 21:18:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:31.656 21:18:47 -- nvmf/common.sh@117 -- # sync 00:25:31.656 21:18:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.656 21:18:47 -- nvmf/common.sh@120 -- # set +e 00:25:31.656 21:18:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.656 21:18:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.656 rmmod nvme_tcp 00:25:31.656 rmmod nvme_fabrics 00:25:31.656 rmmod nvme_keyring 00:25:31.656 21:18:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.656 21:18:47 -- nvmf/common.sh@124 -- # set -e 00:25:31.656 21:18:47 -- nvmf/common.sh@125 -- # return 0 00:25:31.656 21:18:47 -- nvmf/common.sh@478 -- # '[' -n 3190290 ']' 00:25:31.656 21:18:47 -- nvmf/common.sh@479 -- # killprocess 3190290 00:25:31.656 21:18:47 -- common/autotest_common.sh@936 -- # '[' -z 3190290 ']' 00:25:31.656 21:18:47 -- common/autotest_common.sh@940 -- # kill -0 3190290 00:25:31.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3190290) - No such process 00:25:31.656 21:18:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3190290 is not found' 00:25:31.656 Process with pid 3190290 is not found 00:25:31.656 21:18:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:31.656 21:18:47 -- nvmf/common.sh@484 -- # [[ 
tcp == \t\c\p ]] 00:25:31.656 21:18:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:31.656 21:18:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.656 21:18:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.656 21:18:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.656 21:18:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.656 21:18:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.203 21:18:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:34.203 00:25:34.203 real 0m42.885s 00:25:34.203 user 1m7.965s 00:25:34.203 sys 0m12.830s 00:25:34.203 21:18:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:34.203 21:18:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.203 ************************************ 00:25:34.203 END TEST nvmf_digest 00:25:34.203 ************************************ 00:25:34.203 21:18:49 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:25:34.203 21:18:49 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:25:34.203 21:18:49 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:25:34.203 21:18:49 -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:34.203 21:18:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:34.203 21:18:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:34.203 21:18:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.203 ************************************ 00:25:34.203 START TEST nvmf_bdevperf 00:25:34.203 ************************************ 00:25:34.203 21:18:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:34.203 * Looking for test storage... 
00:25:34.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.203 21:18:49 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.203 21:18:49 -- nvmf/common.sh@7 -- # uname -s 00:25:34.203 21:18:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.203 21:18:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.203 21:18:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.203 21:18:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.203 21:18:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.203 21:18:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.203 21:18:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.203 21:18:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.203 21:18:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.203 21:18:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.203 21:18:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:34.203 21:18:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:34.203 21:18:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.204 21:18:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.204 21:18:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.204 21:18:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.204 21:18:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.204 21:18:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.204 21:18:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.204 21:18:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.204 21:18:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.204 21:18:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.204 21:18:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.204 21:18:49 -- paths/export.sh@5 -- # export PATH 00:25:34.204 21:18:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.204 21:18:49 -- nvmf/common.sh@47 -- # : 0 00:25:34.204 21:18:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:34.204 21:18:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:34.204 21:18:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.204 21:18:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.204 21:18:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.204 21:18:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:34.204 21:18:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:34.204 21:18:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:34.204 21:18:49 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:34.204 21:18:49 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:34.204 21:18:49 -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:34.204 21:18:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:34.204 21:18:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.204 21:18:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:34.204 21:18:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:34.204 21:18:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:34.204 21:18:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.204 21:18:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.204 21:18:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.204 21:18:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:34.204 21:18:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:34.204 21:18:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:34.204 21:18:49 -- common/autotest_common.sh@10 -- # set +x 00:25:40.773 21:18:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.773 21:18:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.773 21:18:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.773 21:18:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.773 21:18:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.773 21:18:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.773 21:18:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.773 21:18:55 -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.773 21:18:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.773 21:18:55 -- nvmf/common.sh@296 
-- # e810=() 00:25:40.773 21:18:55 -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.773 21:18:55 -- nvmf/common.sh@297 -- # x722=() 00:25:40.773 21:18:55 -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.773 21:18:55 -- nvmf/common.sh@298 -- # mlx=() 00:25:40.773 21:18:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.773 21:18:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.773 21:18:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.773 21:18:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.773 21:18:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.773 21:18:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.773 21:18:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:40.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:40.773 21:18:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.773 21:18:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:40.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:40.773 21:18:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.773 21:18:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.773 21:18:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.773 21:18:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.773 21:18:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.773 21:18:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:40.773 Found 
net devices under 0000:86:00.0: cvl_0_0 00:25:40.773 21:18:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.773 21:18:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.773 21:18:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.773 21:18:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.773 21:18:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.773 21:18:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:40.773 Found net devices under 0000:86:00.1: cvl_0_1 00:25:40.773 21:18:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.773 21:18:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:40.773 21:18:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:40.773 21:18:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:40.773 21:18:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:40.773 21:18:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.773 21:18:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.773 21:18:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.773 21:18:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.773 21:18:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.773 21:18:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.773 21:18:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.773 21:18:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.773 21:18:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.773 21:18:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.773 21:18:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.773 21:18:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.773 21:18:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.773 21:18:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.773 21:18:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.773 21:18:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.773 21:18:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.773 21:18:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.773 21:18:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.773 21:18:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:25:40.773 00:25:40.773 --- 10.0.0.2 ping statistics --- 00:25:40.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.773 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:25:40.773 21:18:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:25:40.773 00:25:40.773 --- 10.0.0.1 ping statistics --- 00:25:40.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.773 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:25:40.774 21:18:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.774 21:18:55 -- nvmf/common.sh@411 -- # return 0 00:25:40.774 21:18:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:40.774 21:18:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.774 21:18:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:40.774 21:18:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:40.774 21:18:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.774 21:18:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:40.774 21:18:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:40.774 21:18:55 -- host/bdevperf.sh@25 -- # tgt_init 00:25:40.774 21:18:55 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:40.774 21:18:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:40.774 21:18:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:40.774 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:25:40.774 21:18:55 -- nvmf/common.sh@470 -- # nvmfpid=3196926 00:25:40.774 21:18:55 -- nvmf/common.sh@471 -- # waitforlisten 3196926 00:25:40.774 21:18:55 -- common/autotest_common.sh@817 -- # '[' -z 3196926 ']' 00:25:40.774 21:18:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.774 21:18:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:40.774 21:18:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.774 21:18:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:40.774 21:18:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:40.774 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:25:40.774 [2024-04-18 21:18:55.971449] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:40.774 [2024-04-18 21:18:55.971493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.774 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.774 [2024-04-18 21:18:56.032989] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:40.774 [2024-04-18 21:18:56.111390] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.774 [2024-04-18 21:18:56.111425] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.774 [2024-04-18 21:18:56.111432] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.774 [2024-04-18 21:18:56.111438] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.774 [2024-04-18 21:18:56.111443] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
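At this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace with core mask 0xE, and the harness is waiting for it to answer on /var/tmp/spdk.sock before any configuration RPCs are sent. Stripped down, that bring-up looks roughly like the sketch below; the binary path, namespace name and core mask come from the trace above, while the polling loop is an illustrative stand-in for the waitforlisten helper rather than its actual implementation.

  # Hedged sketch of the target bring-up traced above: start nvmf_tgt in the test
  # namespace, then poll its RPC socket until it responds. The loop stands in for
  # waitforlisten from autotest_common.sh and is an assumption, not the real helper.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before its RPC socket came up" >&2; exit 1; }
    sleep 0.5
  done
  echo "nvmf_tgt is up and listening on /var/tmp/spdk.sock (pid $nvmfpid)"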
00:25:40.774 [2024-04-18 21:18:56.111547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.774 [2024-04-18 21:18:56.111569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.774 [2024-04-18 21:18:56.111571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.033 21:18:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:41.033 21:18:56 -- common/autotest_common.sh@850 -- # return 0 00:25:41.033 21:18:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:41.033 21:18:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:41.033 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 21:18:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.033 21:18:56 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.033 21:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.033 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 [2024-04-18 21:18:56.816750] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.033 21:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.033 21:18:56 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:41.033 21:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.033 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 Malloc0 00:25:41.033 21:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.033 21:18:56 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:41.033 21:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.033 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 21:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.033 21:18:56 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:41.033 21:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.033 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 21:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.033 21:18:56 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.033 21:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.033 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 [2024-04-18 21:18:56.880329] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.033 21:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.033 21:18:56 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:41.033 21:18:56 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:41.033 21:18:56 -- nvmf/common.sh@521 -- # config=() 00:25:41.033 21:18:56 -- nvmf/common.sh@521 -- # local subsystem config 00:25:41.033 21:18:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:41.033 21:18:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:41.033 { 00:25:41.033 "params": { 00:25:41.033 "name": "Nvme$subsystem", 00:25:41.033 "trtype": "$TEST_TRANSPORT", 00:25:41.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.033 "adrfam": "ipv4", 00:25:41.033 "trsvcid": "$NVMF_PORT", 00:25:41.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.033 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.033 "hdgst": ${hdgst:-false}, 00:25:41.033 "ddgst": ${ddgst:-false} 00:25:41.033 }, 00:25:41.033 "method": "bdev_nvme_attach_controller" 00:25:41.033 } 00:25:41.033 EOF 00:25:41.033 )") 00:25:41.033 21:18:56 -- nvmf/common.sh@543 -- # cat 00:25:41.033 21:18:56 -- nvmf/common.sh@545 -- # jq . 00:25:41.033 21:18:56 -- nvmf/common.sh@546 -- # IFS=, 00:25:41.033 21:18:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:41.033 "params": { 00:25:41.033 "name": "Nvme1", 00:25:41.033 "trtype": "tcp", 00:25:41.033 "traddr": "10.0.0.2", 00:25:41.033 "adrfam": "ipv4", 00:25:41.033 "trsvcid": "4420", 00:25:41.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:41.033 "hdgst": false, 00:25:41.033 "ddgst": false 00:25:41.033 }, 00:25:41.033 "method": "bdev_nvme_attach_controller" 00:25:41.033 }' 00:25:41.033 [2024-04-18 21:18:56.929500] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:41.033 [2024-04-18 21:18:56.929554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197176 ] 00:25:41.033 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.292 [2024-04-18 21:18:56.990044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.292 [2024-04-18 21:18:57.063712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.550 Running I/O for 1 seconds... 00:25:42.487 00:25:42.487 Latency(us) 00:25:42.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.487 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:42.487 Verification LBA range: start 0x0 length 0x4000 00:25:42.487 Nvme1n1 : 1.00 10822.41 42.28 0.00 0.00 11776.37 776.46 18236.10 00:25:42.487 =================================================================================================================== 00:25:42.487 Total : 10822.41 42.28 0.00 0.00 11776.37 776.46 18236.10 00:25:42.745 21:18:58 -- host/bdevperf.sh@30 -- # bdevperfpid=3197403 00:25:42.745 21:18:58 -- host/bdevperf.sh@32 -- # sleep 3 00:25:42.745 21:18:58 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:42.746 21:18:58 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:42.746 21:18:58 -- nvmf/common.sh@521 -- # config=() 00:25:42.746 21:18:58 -- nvmf/common.sh@521 -- # local subsystem config 00:25:42.746 21:18:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:42.746 21:18:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:42.746 { 00:25:42.746 "params": { 00:25:42.746 "name": "Nvme$subsystem", 00:25:42.746 "trtype": "$TEST_TRANSPORT", 00:25:42.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.746 "adrfam": "ipv4", 00:25:42.746 "trsvcid": "$NVMF_PORT", 00:25:42.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.746 "hdgst": ${hdgst:-false}, 00:25:42.746 "ddgst": ${ddgst:-false} 00:25:42.746 }, 00:25:42.746 "method": "bdev_nvme_attach_controller" 00:25:42.746 } 00:25:42.746 EOF 00:25:42.746 )") 00:25:42.746 21:18:58 -- nvmf/common.sh@543 -- # cat 00:25:42.746 21:18:58 -- nvmf/common.sh@545 -- # jq . 
00:25:42.746 21:18:58 -- nvmf/common.sh@546 -- # IFS=, 00:25:42.746 21:18:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:42.746 "params": { 00:25:42.746 "name": "Nvme1", 00:25:42.746 "trtype": "tcp", 00:25:42.746 "traddr": "10.0.0.2", 00:25:42.746 "adrfam": "ipv4", 00:25:42.746 "trsvcid": "4420", 00:25:42.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:42.746 "hdgst": false, 00:25:42.746 "ddgst": false 00:25:42.746 }, 00:25:42.746 "method": "bdev_nvme_attach_controller" 00:25:42.746 }' 00:25:42.746 [2024-04-18 21:18:58.516310] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:42.746 [2024-04-18 21:18:58.516360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197403 ] 00:25:42.746 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.746 [2024-04-18 21:18:58.576329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.746 [2024-04-18 21:18:58.644268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.313 Running I/O for 15 seconds... 00:25:45.851 21:19:01 -- host/bdevperf.sh@33 -- # kill -9 3196926 00:25:45.851 21:19:01 -- host/bdevperf.sh@35 -- # sleep 3 00:25:45.851 [2024-04-18 21:19:01.490331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.851 [2024-04-18 21:19:01.490603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:40 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.851 [2024-04-18 21:19:01.490907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.851 [2024-04-18 21:19:01.490916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.490926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.490935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.490945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85288 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.490958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.490968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.490976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.490987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.490996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 
21:19:01.491181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.491979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.491989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.852 [2024-04-18 21:19:01.492001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.852 [2024-04-18 21:19:01.492012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.853 [2024-04-18 21:19:01.492238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.853 [2024-04-18 21:19:01.492261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.853 [2024-04-18 21:19:01.492283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 
21:19:01.492296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.853 [2024-04-18 21:19:01.492306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.853 [2024-04-18 21:19:01.492330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.853 [2024-04-18 21:19:01.492352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.853 [2024-04-18 21:19:01.492374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.853 [2024-04-18 21:19:01.492406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.853 [2024-04-18 21:19:01.492960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.853 [2024-04-18 21:19:01.492970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.492982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.492993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 
21:19:01.493270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.854 [2024-04-18 21:19:01.493359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.854 [2024-04-18 21:19:01.493387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.854 [2024-04-18 21:19:01.493409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.854 [2024-04-18 21:19:01.493431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.854 [2024-04-18 21:19:01.493453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.854 [2024-04-18 21:19:01.493476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c001a0 is same with the state(5) to be set 00:25:45.854 [2024-04-18 21:19:01.493499] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:45.854 [2024-04-18 21:19:01.493507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:45.854 [2024-04-18 21:19:01.493522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85152 len:8 PRP1 0x0 PRP2 0x0 00:25:45.854 [2024-04-18 21:19:01.493533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493585] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c001a0 was disconnected and freed. reset controller. 00:25:45.854 [2024-04-18 21:19:01.493641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.854 [2024-04-18 21:19:01.493654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.854 [2024-04-18 21:19:01.493676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.854 [2024-04-18 21:19:01.493698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.854 [2024-04-18 21:19:01.493720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.854 [2024-04-18 21:19:01.493729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.854 [2024-04-18 21:19:01.496849] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.854 [2024-04-18 21:19:01.496888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.854 [2024-04-18 21:19:01.497711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.854 [2024-04-18 21:19:01.498060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.854 [2024-04-18 21:19:01.498075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.854 [2024-04-18 21:19:01.498086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.854 [2024-04-18 21:19:01.498279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.854 [2024-04-18 21:19:01.498461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.854 [2024-04-18 21:19:01.498472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.854 [2024-04-18 21:19:01.498482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.854 [2024-04-18 21:19:01.501311] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.854 [2024-04-18 21:19:01.510131] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.854 [2024-04-18 21:19:01.510712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.854 [2024-04-18 21:19:01.511038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.854 [2024-04-18 21:19:01.511078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.854 [2024-04-18 21:19:01.511112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.854 [2024-04-18 21:19:01.511692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.854 [2024-04-18 21:19:01.511870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.854 [2024-04-18 21:19:01.511880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.854 [2024-04-18 21:19:01.511889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.854 [2024-04-18 21:19:01.514529] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.854 [2024-04-18 21:19:01.522983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.854 [2024-04-18 21:19:01.523550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.854 [2024-04-18 21:19:01.523864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.854 [2024-04-18 21:19:01.523903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.854 [2024-04-18 21:19:01.523937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.855 [2024-04-18 21:19:01.524465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.855 [2024-04-18 21:19:01.524659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.855 [2024-04-18 21:19:01.524669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.855 [2024-04-18 21:19:01.524678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.855 [2024-04-18 21:19:01.527330] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
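[editor's note] The reset attempts above all fail at the socket layer with "connect() failed, errno = 111", which on Linux is ECONNREFUSED: nothing is accepting connections on the target's NVMe/TCP port while the listener is being torn down. A minimal, illustrative C sketch of that failure mode is below; the loopback address is a placeholder and none of this is SPDK code or the test's actual configuration.

```c
/* Illustrative only: a TCP connect() to a port with no listener fails with
 * errno 111 (ECONNREFUSED) on Linux, matching the repeated
 * "posix_sock_create: connect() failed, errno = 111" messages in the log.
 * 127.0.0.1 is a placeholder target, not the test's 10.0.0.2 address. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* placeholder target */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on the port, Linux reports ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```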
00:25:45.855 [2024-04-18 21:19:01.535833] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.855 [2024-04-18 21:19:01.536428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.536857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.536898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.855 [2024-04-18 21:19:01.536932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.855 [2024-04-18 21:19:01.537558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.855 [2024-04-18 21:19:01.537790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.855 [2024-04-18 21:19:01.537800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.855 [2024-04-18 21:19:01.537809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.855 [2024-04-18 21:19:01.540448] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.855 [2024-04-18 21:19:01.548656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.855 [2024-04-18 21:19:01.549232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.549656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.549698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.855 [2024-04-18 21:19:01.549733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.855 [2024-04-18 21:19:01.550345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.855 [2024-04-18 21:19:01.550753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.855 [2024-04-18 21:19:01.550762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.855 [2024-04-18 21:19:01.550771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.855 [2024-04-18 21:19:01.553564] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.855 [2024-04-18 21:19:01.561530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.855 [2024-04-18 21:19:01.562020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.562435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.562475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.855 [2024-04-18 21:19:01.562509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.855 [2024-04-18 21:19:01.563138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.855 [2024-04-18 21:19:01.563714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.855 [2024-04-18 21:19:01.563724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.855 [2024-04-18 21:19:01.563733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.855 [2024-04-18 21:19:01.566375] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.855 [2024-04-18 21:19:01.574450] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.855 [2024-04-18 21:19:01.575033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.575386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.575433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.855 [2024-04-18 21:19:01.575466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.855 [2024-04-18 21:19:01.575721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.855 [2024-04-18 21:19:01.575981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.855 [2024-04-18 21:19:01.575994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.855 [2024-04-18 21:19:01.576007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.855 [2024-04-18 21:19:01.580047] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.855 [2024-04-18 21:19:01.587864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.855 [2024-04-18 21:19:01.588447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.588884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.588925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.855 [2024-04-18 21:19:01.588959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.855 [2024-04-18 21:19:01.589586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.855 [2024-04-18 21:19:01.589928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.855 [2024-04-18 21:19:01.589938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.855 [2024-04-18 21:19:01.589947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.855 [2024-04-18 21:19:01.592695] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.855 [2024-04-18 21:19:01.600646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.855 [2024-04-18 21:19:01.601217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.601563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.601604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.855 [2024-04-18 21:19:01.601639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.855 [2024-04-18 21:19:01.601839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.855 [2024-04-18 21:19:01.602005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.855 [2024-04-18 21:19:01.602013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.855 [2024-04-18 21:19:01.602022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.855 [2024-04-18 21:19:01.604660] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.855 [2024-04-18 21:19:01.613559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.855 [2024-04-18 21:19:01.614117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.614411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.614450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.855 [2024-04-18 21:19:01.614494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.855 [2024-04-18 21:19:01.614878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.855 [2024-04-18 21:19:01.615054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.855 [2024-04-18 21:19:01.615064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.855 [2024-04-18 21:19:01.615073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.855 [2024-04-18 21:19:01.617703] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.855 [2024-04-18 21:19:01.626459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.855 [2024-04-18 21:19:01.627037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.855 [2024-04-18 21:19:01.627383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.627422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.627456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.628084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.628287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.628297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.628305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.630920] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.856 [2024-04-18 21:19:01.639281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.639586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.639890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.639929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.639963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.640590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.641148] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.641157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.641167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.643799] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.856 [2024-04-18 21:19:01.652144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.652640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.653004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.653016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.653026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.653201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.653368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.653377] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.653385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.656119] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.856 [2024-04-18 21:19:01.665111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.665672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.666068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.666107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.666141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.666604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.666865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.666879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.666897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.670943] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.856 [2024-04-18 21:19:01.678664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.679118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.679464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.679503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.679557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.679822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.679994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.680003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.680012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.682684] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.856 [2024-04-18 21:19:01.691498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.692076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.692488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.692541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.692577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.693133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.693313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.693323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.693332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.696047] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.856 [2024-04-18 21:19:01.704479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.705044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.705414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.705453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.705485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.705872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.706048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.706058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.706067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.708806] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.856 [2024-04-18 21:19:01.717259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.717750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.718111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.718123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.718133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.718305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.718471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.718480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.718489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.721173] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.856 [2024-04-18 21:19:01.730137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.730699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.730987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.730999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.731009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.731180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.731347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.856 [2024-04-18 21:19:01.731360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.856 [2024-04-18 21:19:01.731368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.856 [2024-04-18 21:19:01.734048] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.856 [2024-04-18 21:19:01.743048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.856 [2024-04-18 21:19:01.743611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.743951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.856 [2024-04-18 21:19:01.743964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.856 [2024-04-18 21:19:01.743974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.856 [2024-04-18 21:19:01.744164] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.856 [2024-04-18 21:19:01.744345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.857 [2024-04-18 21:19:01.744355] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.857 [2024-04-18 21:19:01.744364] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.857 [2024-04-18 21:19:01.747183] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.857 [2024-04-18 21:19:01.756137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.857 [2024-04-18 21:19:01.756708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.857 [2024-04-18 21:19:01.757018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.857 [2024-04-18 21:19:01.757057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.857 [2024-04-18 21:19:01.757091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.857 [2024-04-18 21:19:01.757406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.857 [2024-04-18 21:19:01.757671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.857 [2024-04-18 21:19:01.757686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.857 [2024-04-18 21:19:01.757699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.857 [2024-04-18 21:19:01.761742] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.857 [2024-04-18 21:19:01.769634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.857 [2024-04-18 21:19:01.770215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.857 [2024-04-18 21:19:01.770607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.857 [2024-04-18 21:19:01.770647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:45.857 [2024-04-18 21:19:01.770681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:45.857 [2024-04-18 21:19:01.771286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:45.857 [2024-04-18 21:19:01.771462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.857 [2024-04-18 21:19:01.771472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.857 [2024-04-18 21:19:01.771485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.857 [2024-04-18 21:19:01.774305] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.117 [2024-04-18 21:19:01.782711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.117 [2024-04-18 21:19:01.783287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.117 [2024-04-18 21:19:01.783621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.117 [2024-04-18 21:19:01.783662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.117 [2024-04-18 21:19:01.783695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.117 [2024-04-18 21:19:01.784289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.117 [2024-04-18 21:19:01.784456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.117 [2024-04-18 21:19:01.784465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.117 [2024-04-18 21:19:01.784473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.117 [2024-04-18 21:19:01.787094] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.117 [2024-04-18 21:19:01.795506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.117 [2024-04-18 21:19:01.796070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.117 [2024-04-18 21:19:01.796424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.117 [2024-04-18 21:19:01.796463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.117 [2024-04-18 21:19:01.796496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.117 [2024-04-18 21:19:01.797033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.117 [2024-04-18 21:19:01.797215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.117 [2024-04-18 21:19:01.797225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.117 [2024-04-18 21:19:01.797234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.117 [2024-04-18 21:19:01.799904] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.117 [2024-04-18 21:19:01.808386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.117 [2024-04-18 21:19:01.808940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.809361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.809400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.118 [2024-04-18 21:19:01.809433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.118 [2024-04-18 21:19:01.810060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.118 [2024-04-18 21:19:01.810527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.118 [2024-04-18 21:19:01.810537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.118 [2024-04-18 21:19:01.810546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.118 [2024-04-18 21:19:01.813150] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.118 [2024-04-18 21:19:01.821227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.118 [2024-04-18 21:19:01.821789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.822208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.822259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.118 [2024-04-18 21:19:01.822268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.118 [2024-04-18 21:19:01.822440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.118 [2024-04-18 21:19:01.822638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.118 [2024-04-18 21:19:01.822648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.118 [2024-04-18 21:19:01.822657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.118 [2024-04-18 21:19:01.825304] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.118 [2024-04-18 21:19:01.834063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.118 [2024-04-18 21:19:01.834634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.835054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.835095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.118 [2024-04-18 21:19:01.835127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.118 [2024-04-18 21:19:01.835751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.118 [2024-04-18 21:19:01.836348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.118 [2024-04-18 21:19:01.836371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.118 [2024-04-18 21:19:01.836379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.118 [2024-04-18 21:19:01.839048] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.118 [2024-04-18 21:19:01.847053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.118 [2024-04-18 21:19:01.847482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.847915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.847955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.118 [2024-04-18 21:19:01.847988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.118 [2024-04-18 21:19:01.848472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.118 [2024-04-18 21:19:01.848721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.118 [2024-04-18 21:19:01.848737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.118 [2024-04-18 21:19:01.848750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.118 [2024-04-18 21:19:01.852795] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.118 [2024-04-18 21:19:01.860383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.118 [2024-04-18 21:19:01.860974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.861328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.861368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.118 [2024-04-18 21:19:01.861402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.118 [2024-04-18 21:19:01.861654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.118 [2024-04-18 21:19:01.861843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.118 [2024-04-18 21:19:01.861853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.118 [2024-04-18 21:19:01.861861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.118 [2024-04-18 21:19:01.864523] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.118 [2024-04-18 21:19:01.873208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.118 [2024-04-18 21:19:01.873779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.874218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.874258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.118 [2024-04-18 21:19:01.874294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.118 [2024-04-18 21:19:01.874616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.118 [2024-04-18 21:19:01.874783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.118 [2024-04-18 21:19:01.874793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.118 [2024-04-18 21:19:01.874804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.118 [2024-04-18 21:19:01.877484] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.118 [2024-04-18 21:19:01.886192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.118 [2024-04-18 21:19:01.886661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.886869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.886882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.118 [2024-04-18 21:19:01.886892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.118 [2024-04-18 21:19:01.887064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.118 [2024-04-18 21:19:01.887230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.118 [2024-04-18 21:19:01.887239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.118 [2024-04-18 21:19:01.887247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.118 [2024-04-18 21:19:01.889975] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.118 [2024-04-18 21:19:01.899067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.118 [2024-04-18 21:19:01.899572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.899910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.118 [2024-04-18 21:19:01.899950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.118 [2024-04-18 21:19:01.899983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.118 [2024-04-18 21:19:01.900567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.118 [2024-04-18 21:19:01.900734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.118 [2024-04-18 21:19:01.900742] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.118 [2024-04-18 21:19:01.900751] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.118 [2024-04-18 21:19:01.903564] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.118 [2024-04-18 21:19:01.912198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:01.912714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.913015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.913054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.119 [2024-04-18 21:19:01.913088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.119 [2024-04-18 21:19:01.913618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.119 [2024-04-18 21:19:01.913795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.119 [2024-04-18 21:19:01.913804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.119 [2024-04-18 21:19:01.913813] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.119 [2024-04-18 21:19:01.916605] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.119 [2024-04-18 21:19:01.925091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:01.925574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.925920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.925959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.119 [2024-04-18 21:19:01.925993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.119 [2024-04-18 21:19:01.926596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.119 [2024-04-18 21:19:01.926763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.119 [2024-04-18 21:19:01.926772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.119 [2024-04-18 21:19:01.926780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.119 [2024-04-18 21:19:01.929425] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.119 [2024-04-18 21:19:01.938010] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:01.938465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.938718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.938767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.119 [2024-04-18 21:19:01.938800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.119 [2024-04-18 21:19:01.939410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.119 [2024-04-18 21:19:01.939626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.119 [2024-04-18 21:19:01.939636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.119 [2024-04-18 21:19:01.939644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.119 [2024-04-18 21:19:01.943461] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.119 [2024-04-18 21:19:01.951774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:01.952245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.952591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.952632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.119 [2024-04-18 21:19:01.952674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.119 [2024-04-18 21:19:01.952856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.119 [2024-04-18 21:19:01.953032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.119 [2024-04-18 21:19:01.953042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.119 [2024-04-18 21:19:01.953051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.119 [2024-04-18 21:19:01.955772] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.119 [2024-04-18 21:19:01.964636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:01.965194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.965482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.965535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.119 [2024-04-18 21:19:01.965573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.119 [2024-04-18 21:19:01.965841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.119 [2024-04-18 21:19:01.966017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.119 [2024-04-18 21:19:01.966026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.119 [2024-04-18 21:19:01.966035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.119 [2024-04-18 21:19:01.968674] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.119 [2024-04-18 21:19:01.977526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:01.977975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.978320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.978359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.119 [2024-04-18 21:19:01.978403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.119 [2024-04-18 21:19:01.978640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.119 [2024-04-18 21:19:01.978826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.119 [2024-04-18 21:19:01.978835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.119 [2024-04-18 21:19:01.978843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.119 [2024-04-18 21:19:01.981429] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.119 [2024-04-18 21:19:01.990345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:01.990831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.990999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:01.991012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.119 [2024-04-18 21:19:01.991022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.119 [2024-04-18 21:19:01.991204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.119 [2024-04-18 21:19:01.991380] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.119 [2024-04-18 21:19:01.991389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.119 [2024-04-18 21:19:01.991398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.119 [2024-04-18 21:19:01.994012] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.119 [2024-04-18 21:19:02.003444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:02.003880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:02.004168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.119 [2024-04-18 21:19:02.004207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.119 [2024-04-18 21:19:02.004241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.119 [2024-04-18 21:19:02.004721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.119 [2024-04-18 21:19:02.004900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.119 [2024-04-18 21:19:02.004909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.119 [2024-04-18 21:19:02.004918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.119 [2024-04-18 21:19:02.007679] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.119 [2024-04-18 21:19:02.016568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.119 [2024-04-18 21:19:02.017048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.120 [2024-04-18 21:19:02.017412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.120 [2024-04-18 21:19:02.017451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.120 [2024-04-18 21:19:02.017485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.120 [2024-04-18 21:19:02.018031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.120 [2024-04-18 21:19:02.018198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.120 [2024-04-18 21:19:02.018207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.120 [2024-04-18 21:19:02.018216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.120 [2024-04-18 21:19:02.020837] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.120 [2024-04-18 21:19:02.029484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.120 [2024-04-18 21:19:02.029969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.120 [2024-04-18 21:19:02.030321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.120 [2024-04-18 21:19:02.030360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.120 [2024-04-18 21:19:02.030394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.120 [2024-04-18 21:19:02.030814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.120 [2024-04-18 21:19:02.031074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.120 [2024-04-18 21:19:02.031088] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.120 [2024-04-18 21:19:02.031101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.120 [2024-04-18 21:19:02.035147] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.120 [2024-04-18 21:19:02.043234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.120 [2024-04-18 21:19:02.043790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.120 [2024-04-18 21:19:02.044194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.120 [2024-04-18 21:19:02.044232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.120 [2024-04-18 21:19:02.044265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.120 [2024-04-18 21:19:02.044886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.120 [2024-04-18 21:19:02.045068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.120 [2024-04-18 21:19:02.045078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.120 [2024-04-18 21:19:02.045087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.380 [2024-04-18 21:19:02.047921] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.380 [2024-04-18 21:19:02.056206] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.380 [2024-04-18 21:19:02.056688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.380 [2024-04-18 21:19:02.056934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.380 [2024-04-18 21:19:02.056973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.380 [2024-04-18 21:19:02.057006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.380 [2024-04-18 21:19:02.057627] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.058159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.058169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.058178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.060810] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.381 [2024-04-18 21:19:02.069385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.069873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.070029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.070068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.070101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.070650] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.070818] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.070827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.070835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.074552] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.381 [2024-04-18 21:19:02.083132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.083561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.083908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.083948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.083981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.084612] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.085076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.085086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.085095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.087836] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.381 [2024-04-18 21:19:02.096030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.096690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.097121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.097161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.097183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.097357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.097530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.097543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.097551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.100288] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.381 [2024-04-18 21:19:02.109062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.109562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.109959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.109998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.110032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.110653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.111126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.111136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.111144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.113823] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.381 [2024-04-18 21:19:02.122077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.122568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.122868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.122907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.122940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.123575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.124150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.124160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.124169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.126809] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.381 [2024-04-18 21:19:02.134909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.135421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.135646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.135660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.135670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.135852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.136028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.136038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.136051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.138693] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.381 [2024-04-18 21:19:02.147742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.148174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.148596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.148636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.148668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.149280] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.149491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.149501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.149518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.152155] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.381 [2024-04-18 21:19:02.160674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.161156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.161581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.161622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.161655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.162223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.162483] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.162497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.162515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.166562] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.381 [2024-04-18 21:19:02.174031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.174578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.174813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.174826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.174836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.175019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.175197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.175207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.175216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.177906] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.381 [2024-04-18 21:19:02.186912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.187503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.187888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.187928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.187961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.188458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.188643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.188653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.188662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.191262] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.381 [2024-04-18 21:19:02.199872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.200444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.200821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.200862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.381 [2024-04-18 21:19:02.200896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.381 [2024-04-18 21:19:02.201467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.381 [2024-04-18 21:19:02.201643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.381 [2024-04-18 21:19:02.201653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.381 [2024-04-18 21:19:02.201662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.381 [2024-04-18 21:19:02.204307] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.381 [2024-04-18 21:19:02.212716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.381 [2024-04-18 21:19:02.213126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.213439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.381 [2024-04-18 21:19:02.213479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.382 [2024-04-18 21:19:02.213527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.382 [2024-04-18 21:19:02.214070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.382 [2024-04-18 21:19:02.214246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.382 [2024-04-18 21:19:02.214256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.382 [2024-04-18 21:19:02.214265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.382 [2024-04-18 21:19:02.216889] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.382 [2024-04-18 21:19:02.225784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.382 [2024-04-18 21:19:02.226274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.226689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.226729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.382 [2024-04-18 21:19:02.226763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.382 [2024-04-18 21:19:02.227292] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.382 [2024-04-18 21:19:02.227469] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.382 [2024-04-18 21:19:02.227479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.382 [2024-04-18 21:19:02.227488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.382 [2024-04-18 21:19:02.230269] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.382 [2024-04-18 21:19:02.238840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.382 [2024-04-18 21:19:02.239350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.239701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.239744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.382 [2024-04-18 21:19:02.239778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.382 [2024-04-18 21:19:02.240391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.382 [2024-04-18 21:19:02.240637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.382 [2024-04-18 21:19:02.240647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.382 [2024-04-18 21:19:02.240656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.382 [2024-04-18 21:19:02.243257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.382 [2024-04-18 21:19:02.251757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.382 [2024-04-18 21:19:02.252342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.252780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.252820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.382 [2024-04-18 21:19:02.252854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.382 [2024-04-18 21:19:02.253377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.382 [2024-04-18 21:19:02.253635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.382 [2024-04-18 21:19:02.253650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.382 [2024-04-18 21:19:02.253663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.382 [2024-04-18 21:19:02.257711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.382 [2024-04-18 21:19:02.265434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.382 [2024-04-18 21:19:02.265932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.266239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.266278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.382 [2024-04-18 21:19:02.266312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.382 [2024-04-18 21:19:02.266940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.382 [2024-04-18 21:19:02.267499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.382 [2024-04-18 21:19:02.267509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.382 [2024-04-18 21:19:02.267527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.382 [2024-04-18 21:19:02.270265] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.382 [2024-04-18 21:19:02.278472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.382 [2024-04-18 21:19:02.278971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.279336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.279375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.382 [2024-04-18 21:19:02.279407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.382 [2024-04-18 21:19:02.280036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.382 [2024-04-18 21:19:02.280507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.382 [2024-04-18 21:19:02.280522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.382 [2024-04-18 21:19:02.280531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.382 [2024-04-18 21:19:02.283257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.382 [2024-04-18 21:19:02.291418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.382 [2024-04-18 21:19:02.291987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.292369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.292408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.382 [2024-04-18 21:19:02.292440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.382 [2024-04-18 21:19:02.292812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.382 [2024-04-18 21:19:02.292979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.382 [2024-04-18 21:19:02.292988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.382 [2024-04-18 21:19:02.292997] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.382 [2024-04-18 21:19:02.295661] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.382 [2024-04-18 21:19:02.304305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.382 [2024-04-18 21:19:02.304814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.305170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.382 [2024-04-18 21:19:02.305186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.382 [2024-04-18 21:19:02.305195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.382 [2024-04-18 21:19:02.305393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.382 [2024-04-18 21:19:02.305573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.382 [2024-04-18 21:19:02.305583] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.382 [2024-04-18 21:19:02.305592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.382 [2024-04-18 21:19:02.308422] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.642 [2024-04-18 21:19:02.317432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.642 [2024-04-18 21:19:02.317976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.642 [2024-04-18 21:19:02.318405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.642 [2024-04-18 21:19:02.318445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.642 [2024-04-18 21:19:02.318480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.642 [2024-04-18 21:19:02.318680] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.642 [2024-04-18 21:19:02.318858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.642 [2024-04-18 21:19:02.318867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.642 [2024-04-18 21:19:02.318876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.642 [2024-04-18 21:19:02.321584] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.642 [2024-04-18 21:19:02.330461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.642 [2024-04-18 21:19:02.331002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.642 [2024-04-18 21:19:02.331307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.642 [2024-04-18 21:19:02.331319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.642 [2024-04-18 21:19:02.331329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.642 [2024-04-18 21:19:02.331503] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.642 [2024-04-18 21:19:02.331699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.642 [2024-04-18 21:19:02.331709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.642 [2024-04-18 21:19:02.331718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.642 [2024-04-18 21:19:02.334371] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.642 [2024-04-18 21:19:02.343378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.343933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.344297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.344337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.344379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.344778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.344944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.344953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.344962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.347557] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.643 [2024-04-18 21:19:02.356266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.356806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.357222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.357262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.357295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.357787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.357964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.357974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.357983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.360685] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.643 [2024-04-18 21:19:02.369056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.369598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.369987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.370025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.370058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.370567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.370744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.370753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.370762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.373406] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.643 [2024-04-18 21:19:02.381853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.382387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.382691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.382731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.382764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.383386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.383898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.383913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.383926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.387967] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.643 [2024-04-18 21:19:02.395429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.395974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.396360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.396399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.396432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.396847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.397023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.397033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.397041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.399728] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.643 [2024-04-18 21:19:02.408246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.408782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.409199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.409238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.409272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.409777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.409953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.409962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.409972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.412609] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.643 [2024-04-18 21:19:02.421048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.421595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.422026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.422064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.422097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.422635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.422815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.422825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.422834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.425479] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.643 [2024-04-18 21:19:02.433925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.434466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.434908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.434948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.434981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.435580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.435772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.435782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.435790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.438375] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.643 [2024-04-18 21:19:02.446773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.447313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.447720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.447761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.643 [2024-04-18 21:19:02.447794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.643 [2024-04-18 21:19:02.448002] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.643 [2024-04-18 21:19:02.448178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.643 [2024-04-18 21:19:02.448187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.643 [2024-04-18 21:19:02.448197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.643 [2024-04-18 21:19:02.450820] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.643 [2024-04-18 21:19:02.459576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.643 [2024-04-18 21:19:02.460121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.460553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.643 [2024-04-18 21:19:02.460596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.460606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.460780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.460947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.460959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.460968] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.463587] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.644 [2024-04-18 21:19:02.472442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.644 [2024-04-18 21:19:02.472993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.473352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.473391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.473424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.473983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.474161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.474171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.474180] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.476809] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.644 [2024-04-18 21:19:02.485262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.644 [2024-04-18 21:19:02.485806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.486242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.486283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.486316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.486946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.487181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.487191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.487200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.489823] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.644 [2024-04-18 21:19:02.498249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.644 [2024-04-18 21:19:02.498808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.499197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.499237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.499271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.499833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.500001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.500010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.500022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.502634] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.644 [2024-04-18 21:19:02.511350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.644 [2024-04-18 21:19:02.511888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.512256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.512295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.512330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.512954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.513146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.513156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.513165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.515962] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.644 [2024-04-18 21:19:02.524456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.644 [2024-04-18 21:19:02.525040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.525441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.525481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.525525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.526122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.526299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.526308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.526317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.529145] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.644 [2024-04-18 21:19:02.537486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.644 [2024-04-18 21:19:02.538027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.538385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.538424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.538458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.538881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.539058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.539068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.539077] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.541708] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.644 [2024-04-18 21:19:02.550263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.644 [2024-04-18 21:19:02.550796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.551106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.551146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.551179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.551806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.552020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.552029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.552039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.554871] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.644 [2024-04-18 21:19:02.563067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.644 [2024-04-18 21:19:02.563605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.563992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.644 [2024-04-18 21:19:02.564031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.644 [2024-04-18 21:19:02.564064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.644 [2024-04-18 21:19:02.564389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.644 [2024-04-18 21:19:02.564578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.644 [2024-04-18 21:19:02.564588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.644 [2024-04-18 21:19:02.564597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.644 [2024-04-18 21:19:02.567276] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.905 [2024-04-18 21:19:02.575945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.905 [2024-04-18 21:19:02.576482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.576916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.576957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.905 [2024-04-18 21:19:02.576990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.905 [2024-04-18 21:19:02.577617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.905 [2024-04-18 21:19:02.578197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.905 [2024-04-18 21:19:02.578208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.905 [2024-04-18 21:19:02.578217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.905 [2024-04-18 21:19:02.580993] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.905 [2024-04-18 21:19:02.588883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.905 [2024-04-18 21:19:02.589443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.589869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.589910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.905 [2024-04-18 21:19:02.589944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.905 [2024-04-18 21:19:02.590168] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.905 [2024-04-18 21:19:02.590345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.905 [2024-04-18 21:19:02.590355] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.905 [2024-04-18 21:19:02.590363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.905 [2024-04-18 21:19:02.592998] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.905 [2024-04-18 21:19:02.601751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.905 [2024-04-18 21:19:02.602296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.602635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.602676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.905 [2024-04-18 21:19:02.602710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.905 [2024-04-18 21:19:02.603263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.905 [2024-04-18 21:19:02.603429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.905 [2024-04-18 21:19:02.603439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.905 [2024-04-18 21:19:02.603447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.905 [2024-04-18 21:19:02.606072] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.905 [2024-04-18 21:19:02.614579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.905 [2024-04-18 21:19:02.615115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.615483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.615539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.905 [2024-04-18 21:19:02.615576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.905 [2024-04-18 21:19:02.616044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.905 [2024-04-18 21:19:02.616211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.905 [2024-04-18 21:19:02.616220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.905 [2024-04-18 21:19:02.616229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.905 [2024-04-18 21:19:02.619974] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.905 [2024-04-18 21:19:02.628266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.905 [2024-04-18 21:19:02.628821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.629294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.629344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.905 [2024-04-18 21:19:02.629353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.905 [2024-04-18 21:19:02.629552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.905 [2024-04-18 21:19:02.629729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.905 [2024-04-18 21:19:02.629739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.905 [2024-04-18 21:19:02.629748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.905 [2024-04-18 21:19:02.632443] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.905 [2024-04-18 21:19:02.641127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.905 [2024-04-18 21:19:02.641681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.642072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.642112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.905 [2024-04-18 21:19:02.642146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.905 [2024-04-18 21:19:02.642773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.905 [2024-04-18 21:19:02.643294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.905 [2024-04-18 21:19:02.643303] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.905 [2024-04-18 21:19:02.643313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.905 [2024-04-18 21:19:02.645956] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.905 [2024-04-18 21:19:02.654002] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.905 [2024-04-18 21:19:02.654469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.654919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.905 [2024-04-18 21:19:02.654961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.905 [2024-04-18 21:19:02.654995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.905 [2024-04-18 21:19:02.655276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.905 [2024-04-18 21:19:02.655443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.655452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.655460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.658158] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.906 [2024-04-18 21:19:02.666840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.667373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.667819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.667861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.667894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.668430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.668611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.668621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.668630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.671227] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.906 [2024-04-18 21:19:02.679670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.680240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.680632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.680672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.680705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.681316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.681557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.681568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.681576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.684213] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.906 [2024-04-18 21:19:02.692480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.692989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.693291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.693330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.693363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.693745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.693926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.693935] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.693944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.696617] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.906 [2024-04-18 21:19:02.705302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.705848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.706191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.706230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.706274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.706668] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.706836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.706845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.706853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.709549] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.906 [2024-04-18 21:19:02.718319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.718862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.719277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.719317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.719351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.719565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.719734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.719743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.719752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.722427] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.906 [2024-04-18 21:19:02.731272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.731757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.732196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.732235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.732270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.732465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.732656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.732666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.732675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.735316] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.906 [2024-04-18 21:19:02.744154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.744692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.745126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.745166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.745199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.745683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.745867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.745876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.745885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.748464] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.906 [2024-04-18 21:19:02.756961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.757501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.757871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.757885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.757896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.758084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.758265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.758275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.758284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.761100] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.906 [2024-04-18 21:19:02.770118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.906 [2024-04-18 21:19:02.770678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.770995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.906 [2024-04-18 21:19:02.771034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.906 [2024-04-18 21:19:02.771067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.906 [2024-04-18 21:19:02.771464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.906 [2024-04-18 21:19:02.771645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.906 [2024-04-18 21:19:02.771655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.906 [2024-04-18 21:19:02.771664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.906 [2024-04-18 21:19:02.774433] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.906 [2024-04-18 21:19:02.783117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.907 [2024-04-18 21:19:02.783673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.907 [2024-04-18 21:19:02.784079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.907 [2024-04-18 21:19:02.784119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.907 [2024-04-18 21:19:02.784153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.907 [2024-04-18 21:19:02.784536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.907 [2024-04-18 21:19:02.784736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.907 [2024-04-18 21:19:02.784746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.907 [2024-04-18 21:19:02.784755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.907 [2024-04-18 21:19:02.787522] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.907 [2024-04-18 21:19:02.796085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.907 [2024-04-18 21:19:02.796628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.907 [2024-04-18 21:19:02.797051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.907 [2024-04-18 21:19:02.797091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.907 [2024-04-18 21:19:02.797124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.907 [2024-04-18 21:19:02.797630] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.907 [2024-04-18 21:19:02.797807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.907 [2024-04-18 21:19:02.797816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.907 [2024-04-18 21:19:02.797825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.907 [2024-04-18 21:19:02.800466] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.907 [2024-04-18 21:19:02.808952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.907 [2024-04-18 21:19:02.809502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.907 [2024-04-18 21:19:02.809937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.907 [2024-04-18 21:19:02.809976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.907 [2024-04-18 21:19:02.810009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.907 [2024-04-18 21:19:02.810517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.907 [2024-04-18 21:19:02.810708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.907 [2024-04-18 21:19:02.810718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.907 [2024-04-18 21:19:02.810727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.907 [2024-04-18 21:19:02.813370] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.907 [2024-04-18 21:19:02.821827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.907 [2024-04-18 21:19:02.822282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.907 [2024-04-18 21:19:02.822632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.907 [2024-04-18 21:19:02.822673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:46.907 [2024-04-18 21:19:02.822706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:46.907 [2024-04-18 21:19:02.823317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:46.907 [2024-04-18 21:19:02.823661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.907 [2024-04-18 21:19:02.823675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.907 [2024-04-18 21:19:02.823684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.907 [2024-04-18 21:19:02.826277] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.168 [2024-04-18 21:19:02.834941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.168 [2024-04-18 21:19:02.835498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.168 [2024-04-18 21:19:02.835861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.168 [2024-04-18 21:19:02.835900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.168 [2024-04-18 21:19:02.835933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.168 [2024-04-18 21:19:02.836561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.168 [2024-04-18 21:19:02.836830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.168 [2024-04-18 21:19:02.836839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.168 [2024-04-18 21:19:02.836848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.839544] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.169 [2024-04-18 21:19:02.847902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.848370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.848771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.848812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.848846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.849058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.849224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.849233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.849242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.851866] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.169 [2024-04-18 21:19:02.860796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.861334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.861690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.861730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.861763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.862374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.862938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.862947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.862959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.865555] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.169 [2024-04-18 21:19:02.873684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.874219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.874598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.874638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.874670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.875016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.875183] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.875192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.875200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.877784] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.169 [2024-04-18 21:19:02.886448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.887001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.887432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.887472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.887505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.888033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.888209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.888219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.888228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.890851] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.169 [2024-04-18 21:19:02.899351] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.899909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.900347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.900386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.900420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.901055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.901232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.901241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.901250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.903875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.169 [2024-04-18 21:19:02.912256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.912802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.913234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.913273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.913305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.913934] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.914244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.914254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.914263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.916881] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.169 [2024-04-18 21:19:02.925033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.925565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.925956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.925995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.926028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.926364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.926536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.926562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.926571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.929227] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.169 [2024-04-18 21:19:02.937840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.938380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.938812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.938854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.938888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.939499] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.940063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.940077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.940090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.944134] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.169 [2024-04-18 21:19:02.951496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.952041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.952451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.952490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.169 [2024-04-18 21:19:02.952538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.169 [2024-04-18 21:19:02.953129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.169 [2024-04-18 21:19:02.953305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.169 [2024-04-18 21:19:02.953314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.169 [2024-04-18 21:19:02.953323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.169 [2024-04-18 21:19:02.956011] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.169 [2024-04-18 21:19:02.964325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.169 [2024-04-18 21:19:02.964863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.169 [2024-04-18 21:19:02.965248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:02.965288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:02.965320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:02.965951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:02.966368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:02.966378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:02.966387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:02.968999] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.170 [2024-04-18 21:19:02.977139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:02.977679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:02.978117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:02.978168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:02.978177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:02.978350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:02.978523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:02.978533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:02.978557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:02.981212] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.170 [2024-04-18 21:19:02.989901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:02.990452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:02.990898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:02.990934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:02.990944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:02.991128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:02.991304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:02.991313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:02.991322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:02.993940] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.170 [2024-04-18 21:19:03.002795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:03.003265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.003674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.003714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:03.003747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:03.004067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:03.004244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:03.004254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:03.004263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:03.006883] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.170 [2024-04-18 21:19:03.015684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:03.016312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.016663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.016704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:03.016738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:03.017350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:03.017565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:03.017575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:03.017584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:03.020392] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.170 [2024-04-18 21:19:03.028742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:03.029323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.029822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.029863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:03.029897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:03.030135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:03.030346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:03.030360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:03.030373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:03.034416] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.170 [2024-04-18 21:19:03.042379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:03.042956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.043385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.043426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:03.043461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:03.044088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:03.044530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:03.044556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:03.044565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:03.047344] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.170 [2024-04-18 21:19:03.055520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:03.056085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.056535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.056575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:03.056609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:03.057127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:03.057308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:03.057318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:03.057327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:03.060088] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.170 [2024-04-18 21:19:03.068378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:03.068930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.069361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.069399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:03.069441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:03.069652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.170 [2024-04-18 21:19:03.069829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.170 [2024-04-18 21:19:03.069839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.170 [2024-04-18 21:19:03.069847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.170 [2024-04-18 21:19:03.072579] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.170 [2024-04-18 21:19:03.081205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.170 [2024-04-18 21:19:03.081747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.082173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.170 [2024-04-18 21:19:03.082212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.170 [2024-04-18 21:19:03.082246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.170 [2024-04-18 21:19:03.082450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.171 [2024-04-18 21:19:03.082643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.171 [2024-04-18 21:19:03.082653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.171 [2024-04-18 21:19:03.082662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.171 [2024-04-18 21:19:03.085316] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.171 [2024-04-18 21:19:03.094262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.171 [2024-04-18 21:19:03.094789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.171 [2024-04-18 21:19:03.095169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.171 [2024-04-18 21:19:03.095182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.171 [2024-04-18 21:19:03.095193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.171 [2024-04-18 21:19:03.095379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.171 [2024-04-18 21:19:03.095567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.171 [2024-04-18 21:19:03.095578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.171 [2024-04-18 21:19:03.095587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.450 [2024-04-18 21:19:03.098407] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.450 [2024-04-18 21:19:03.107347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.450 [2024-04-18 21:19:03.107926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.450 [2024-04-18 21:19:03.108342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.450 [2024-04-18 21:19:03.108382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.450 [2024-04-18 21:19:03.108416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.450 [2024-04-18 21:19:03.109055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.450 [2024-04-18 21:19:03.109572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.109582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.109592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.112302] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.451 [2024-04-18 21:19:03.120263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.120742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.121052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.121092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.121130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.121608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.121797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.121807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.121815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.124398] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.451 [2024-04-18 21:19:03.133159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.133730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.134068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.134107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.134141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.134488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.134660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.134670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.134678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.137314] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.451 [2024-04-18 21:19:03.146046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.146613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.147055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.147095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.147128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.147685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.147866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.147875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.147884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.150575] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.451 [2024-04-18 21:19:03.158937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.159497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.159856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.159896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.159929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.160557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.160793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.160803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.160812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.163458] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.451 [2024-04-18 21:19:03.171757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.172322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.172731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.172745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.172756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.172940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.173116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.173126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.173135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.175759] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.451 [2024-04-18 21:19:03.184579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.185114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.185457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.185497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.185548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.186162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.186752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.186764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.186773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.189367] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.451 [2024-04-18 21:19:03.197461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.198028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.198393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.198431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.198464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.198877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.199053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.199062] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.199071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.201707] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.451 [2024-04-18 21:19:03.210357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.210940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.211358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.211397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.211431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.211904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.212165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.212178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.212191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.216229] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.451 [2024-04-18 21:19:03.223780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.224321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.224738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.224779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.224815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.224999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.225175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.225185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.225201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.227887] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.451 [2024-04-18 21:19:03.236689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.237248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.237591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.237632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.237665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.237998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.238165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.238174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.238183] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.240768] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.451 [2024-04-18 21:19:03.249574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.250125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.250528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.250568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.250601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.250817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.250994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.251004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.251013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.253690] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.451 [2024-04-18 21:19:03.262495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.451 [2024-04-18 21:19:03.263089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.263276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.451 [2024-04-18 21:19:03.263315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.451 [2024-04-18 21:19:03.263349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.451 [2024-04-18 21:19:03.263761] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.451 [2024-04-18 21:19:03.263941] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.451 [2024-04-18 21:19:03.263950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.451 [2024-04-18 21:19:03.263959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.451 [2024-04-18 21:19:03.266561] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.451 [2024-04-18 21:19:03.275632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.452 [2024-04-18 21:19:03.276056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.276345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.276358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.452 [2024-04-18 21:19:03.276370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.452 [2024-04-18 21:19:03.276565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.452 [2024-04-18 21:19:03.276747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.452 [2024-04-18 21:19:03.276757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.452 [2024-04-18 21:19:03.276766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.452 [2024-04-18 21:19:03.279586] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.452 [2024-04-18 21:19:03.288722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.452 [2024-04-18 21:19:03.289187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.289428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.289441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.452 [2024-04-18 21:19:03.289452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.452 [2024-04-18 21:19:03.289648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.452 [2024-04-18 21:19:03.289832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.452 [2024-04-18 21:19:03.289842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.452 [2024-04-18 21:19:03.289851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.452 [2024-04-18 21:19:03.292667] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.452 [2024-04-18 21:19:03.301804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.452 [2024-04-18 21:19:03.302366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.302655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.302668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.452 [2024-04-18 21:19:03.302679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.452 [2024-04-18 21:19:03.302867] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.452 [2024-04-18 21:19:03.303048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.452 [2024-04-18 21:19:03.303058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.452 [2024-04-18 21:19:03.303067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.452 [2024-04-18 21:19:03.305900] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.452 [2024-04-18 21:19:03.314879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.452 [2024-04-18 21:19:03.315459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.315746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.315760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.452 [2024-04-18 21:19:03.315770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.452 [2024-04-18 21:19:03.315958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.452 [2024-04-18 21:19:03.316139] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.452 [2024-04-18 21:19:03.316148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.452 [2024-04-18 21:19:03.316158] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.452 [2024-04-18 21:19:03.318972] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.452 [2024-04-18 21:19:03.327971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.452 [2024-04-18 21:19:03.328551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.328914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.328928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.452 [2024-04-18 21:19:03.328938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.452 [2024-04-18 21:19:03.329126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.452 [2024-04-18 21:19:03.329308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.452 [2024-04-18 21:19:03.329318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.452 [2024-04-18 21:19:03.329327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.452 [2024-04-18 21:19:03.332148] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.452 [2024-04-18 21:19:03.341111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.452 [2024-04-18 21:19:03.341679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.341967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.341980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.452 [2024-04-18 21:19:03.341990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.452 [2024-04-18 21:19:03.342179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.452 [2024-04-18 21:19:03.342361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.452 [2024-04-18 21:19:03.342371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.452 [2024-04-18 21:19:03.342380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.452 [2024-04-18 21:19:03.345203] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.452 [2024-04-18 21:19:03.354168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.452 [2024-04-18 21:19:03.354739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.355099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.355111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.452 [2024-04-18 21:19:03.355122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.452 [2024-04-18 21:19:03.355308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.452 [2024-04-18 21:19:03.355490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.452 [2024-04-18 21:19:03.355499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.452 [2024-04-18 21:19:03.355508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.452 [2024-04-18 21:19:03.358330] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.452 [2024-04-18 21:19:03.367283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.452 [2024-04-18 21:19:03.367828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.368196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.452 [2024-04-18 21:19:03.368209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.452 [2024-04-18 21:19:03.368220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.452 [2024-04-18 21:19:03.368407] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.452 [2024-04-18 21:19:03.368593] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.452 [2024-04-18 21:19:03.368603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.452 [2024-04-18 21:19:03.368612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.452 [2024-04-18 21:19:03.371424] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.711 [2024-04-18 21:19:03.380414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.711 [2024-04-18 21:19:03.380970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.711 [2024-04-18 21:19:03.381332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.711 [2024-04-18 21:19:03.381344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.711 [2024-04-18 21:19:03.381355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.381548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.381730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.381740] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.381749] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.384567] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.712 [2024-04-18 21:19:03.393579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.394154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.394522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.394536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.712 [2024-04-18 21:19:03.394546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.394735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.394917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.394926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.394936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.397760] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.712 [2024-04-18 21:19:03.406731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.407307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.407599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.407613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.712 [2024-04-18 21:19:03.407623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.407811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.407992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.408002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.408012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.410833] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.712 [2024-04-18 21:19:03.419793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.420344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.420702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.420716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.712 [2024-04-18 21:19:03.420726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.420915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.421097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.421107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.421116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.423942] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.712 [2024-04-18 21:19:03.432909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.433486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.433853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.433868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.712 [2024-04-18 21:19:03.433881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.434070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.434251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.434260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.434269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.437095] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.712 [2024-04-18 21:19:03.446052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.446460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.446821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.446835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.712 [2024-04-18 21:19:03.446845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.447034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.447216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.447226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.447235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.450057] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.712 [2024-04-18 21:19:03.459183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.459766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.460121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.460135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.712 [2024-04-18 21:19:03.460145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.460334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.460520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.460530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.460539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.463357] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.712 [2024-04-18 21:19:03.472328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.472847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.473155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.473168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.712 [2024-04-18 21:19:03.473178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.473369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.473557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.473567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.473576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.476388] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.712 [2024-04-18 21:19:03.485504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.486082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.486372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.486385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.712 [2024-04-18 21:19:03.486396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.712 [2024-04-18 21:19:03.486590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.712 [2024-04-18 21:19:03.486773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.712 [2024-04-18 21:19:03.486782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.712 [2024-04-18 21:19:03.486791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.712 [2024-04-18 21:19:03.489610] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.712 [2024-04-18 21:19:03.498572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.712 [2024-04-18 21:19:03.499096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.499437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.712 [2024-04-18 21:19:03.499477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.499523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.499712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.499902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.499912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.499921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.502712] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.713 [2024-04-18 21:19:03.511662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.512223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.512450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.512491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.512542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.513164] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.513598] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.513608] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.513617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.516350] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.713 [2024-04-18 21:19:03.524705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.525223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.525658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.525699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.525733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.526346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.526650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.526662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.526671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.529484] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.713 [2024-04-18 21:19:03.537805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.538233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.538526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.538539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.538550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.538738] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.538919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.538929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.538938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.541763] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.713 [2024-04-18 21:19:03.550889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.551371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.551659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.551675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.551685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.551876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.552061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.552071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.552080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.555119] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.713 [2024-04-18 21:19:03.564153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.564728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.565021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.565035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.565047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.565256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.565455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.565466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.565476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.568469] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.713 [2024-04-18 21:19:03.577428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.577941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.578300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.578314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.578325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.578525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.578712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.578722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.578732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.581635] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.713 [2024-04-18 21:19:03.590698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.591250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.591615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.591631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.591642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.591837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.592024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.592034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.592047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.594949] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.713 [2024-04-18 21:19:03.603989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.604577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.604863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.604877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.713 [2024-04-18 21:19:03.604888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.713 [2024-04-18 21:19:03.605084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.713 [2024-04-18 21:19:03.605271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.713 [2024-04-18 21:19:03.605281] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.713 [2024-04-18 21:19:03.605291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.713 [2024-04-18 21:19:03.608265] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.713 [2024-04-18 21:19:03.617382] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.713 [2024-04-18 21:19:03.617974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.713 [2024-04-18 21:19:03.618258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.714 [2024-04-18 21:19:03.618272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.714 [2024-04-18 21:19:03.618284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.714 [2024-04-18 21:19:03.618489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.714 [2024-04-18 21:19:03.618694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.714 [2024-04-18 21:19:03.618705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.714 [2024-04-18 21:19:03.618715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.714 [2024-04-18 21:19:03.621810] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.714 [2024-04-18 21:19:03.630536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.714 [2024-04-18 21:19:03.631133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.714 [2024-04-18 21:19:03.631471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.714 [2024-04-18 21:19:03.631485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.714 [2024-04-18 21:19:03.631497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.714 [2024-04-18 21:19:03.631710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.714 [2024-04-18 21:19:03.631911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.714 [2024-04-18 21:19:03.631921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.714 [2024-04-18 21:19:03.631935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.714 [2024-04-18 21:19:03.635010] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.973 [2024-04-18 21:19:03.643733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.973 [2024-04-18 21:19:03.644317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.644652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.644667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.973 [2024-04-18 21:19:03.644677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.973 [2024-04-18 21:19:03.644871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.973 [2024-04-18 21:19:03.645058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.973 [2024-04-18 21:19:03.645068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.973 [2024-04-18 21:19:03.645077] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.973 [2024-04-18 21:19:03.647982] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.973 [2024-04-18 21:19:03.656842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.973 [2024-04-18 21:19:03.657428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.657768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.657809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.973 [2024-04-18 21:19:03.657842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.973 [2024-04-18 21:19:03.658056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.973 [2024-04-18 21:19:03.658239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.973 [2024-04-18 21:19:03.658249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.973 [2024-04-18 21:19:03.658258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.973 [2024-04-18 21:19:03.661074] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.973 [2024-04-18 21:19:03.669864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.973 [2024-04-18 21:19:03.670448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.670873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.670914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.973 [2024-04-18 21:19:03.670948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.973 [2024-04-18 21:19:03.671572] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.973 [2024-04-18 21:19:03.671999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.973 [2024-04-18 21:19:03.672008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.973 [2024-04-18 21:19:03.672016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.973 [2024-04-18 21:19:03.674622] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.973 [2024-04-18 21:19:03.682755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.973 [2024-04-18 21:19:03.683315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.683727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.683767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.973 [2024-04-18 21:19:03.683801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.973 [2024-04-18 21:19:03.684072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.973 [2024-04-18 21:19:03.684249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.973 [2024-04-18 21:19:03.684258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.973 [2024-04-18 21:19:03.684267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.973 [2024-04-18 21:19:03.686885] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.973 [2024-04-18 21:19:03.695638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.973 [2024-04-18 21:19:03.696200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.696499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.696554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.973 [2024-04-18 21:19:03.696588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.973 [2024-04-18 21:19:03.697154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.973 [2024-04-18 21:19:03.697414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.973 [2024-04-18 21:19:03.697428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.973 [2024-04-18 21:19:03.697441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.973 [2024-04-18 21:19:03.701488] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.973 [2024-04-18 21:19:03.709415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.973 [2024-04-18 21:19:03.709825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.710063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.710076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.973 [2024-04-18 21:19:03.710086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.973 [2024-04-18 21:19:03.710263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.973 [2024-04-18 21:19:03.710436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.973 [2024-04-18 21:19:03.710445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.973 [2024-04-18 21:19:03.710454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.973 [2024-04-18 21:19:03.713210] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.973 [2024-04-18 21:19:03.722354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.973 [2024-04-18 21:19:03.722851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.723146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.973 [2024-04-18 21:19:03.723186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.973 [2024-04-18 21:19:03.723220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.973 [2024-04-18 21:19:03.723784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.973 [2024-04-18 21:19:03.723952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.973 [2024-04-18 21:19:03.723961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.973 [2024-04-18 21:19:03.723969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.973 [2024-04-18 21:19:03.726646] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.973 [2024-04-18 21:19:03.735341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.973 [2024-04-18 21:19:03.735899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.736249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.736288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.736321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.736820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.736996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.737006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.737015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.739705] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.974 [2024-04-18 21:19:03.748156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.748719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.749133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.749172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.749205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.749783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.749960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.749969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.749978] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.752613] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.974 [2024-04-18 21:19:03.761088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.761662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.762087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.762126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.762159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.762429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.762620] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.762630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.762639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.765294] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.974 [2024-04-18 21:19:03.773890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.774431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.774767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.774809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.774842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.775068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.775245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.775254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.775263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.777952] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.974 [2024-04-18 21:19:03.786961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.787528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.787941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.787979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.788009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.788205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.788441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.788455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.788468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.792501] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.974 [2024-04-18 21:19:03.800297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.800804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.801129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.801168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.801212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.801838] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.802046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.802055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.802064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.804779] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.974 [2024-04-18 21:19:03.813283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.813817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.814133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.814172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.814205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.814838] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.815316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.815326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.815335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.817947] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.974 [2024-04-18 21:19:03.826111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.826672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.827038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.827077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.827110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.827486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.827680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.827690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.827699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.830394] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.974 [2024-04-18 21:19:03.838894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.839278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.839692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.839733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.839782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.840263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.840430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.840438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.840447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.843067] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.974 [2024-04-18 21:19:03.851774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.852344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.852619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.852660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.852692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.853304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.853532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.853541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.853550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.856159] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.974 [2024-04-18 21:19:03.864574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.865143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.865576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.865618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.865650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.866092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.866258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.866267] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.866276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.868854] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.974 [2024-04-18 21:19:03.877366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.877912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.878305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.878343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.878377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.878605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.878782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.878792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.878801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.881449] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.974 [2024-04-18 21:19:03.890251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.974 [2024-04-18 21:19:03.890807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.891170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.974 [2024-04-18 21:19:03.891209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:47.974 [2024-04-18 21:19:03.891244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:47.974 [2024-04-18 21:19:03.891636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:47.974 [2024-04-18 21:19:03.891813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.974 [2024-04-18 21:19:03.891822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.974 [2024-04-18 21:19:03.891831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.974 [2024-04-18 21:19:03.894470] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.235 [2024-04-18 21:19:03.903429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:03.904015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.904429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.904468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:03.904508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:03.904722] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:03.904897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:03.904907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:03.904915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:03.907683] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.235 [2024-04-18 21:19:03.916355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:03.916904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.917318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.917357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:03.917390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:03.918018] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:03.918486] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:03.918495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:03.918504] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:03.921113] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.235 [2024-04-18 21:19:03.929263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:03.929807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.930089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.930128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:03.930157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:03.930329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:03.930495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:03.930504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:03.930518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:03.933197] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.235 [2024-04-18 21:19:03.942102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:03.942590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.943008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.943047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:03.943081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:03.943673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:03.943849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:03.943859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:03.943868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:03.946514] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.235 [2024-04-18 21:19:03.954974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:03.955559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.955955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.955996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:03.956029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:03.956656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:03.956848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:03.956861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:03.956870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:03.959514] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.235 [2024-04-18 21:19:03.967864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:03.968439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.968850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.968891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:03.968925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:03.969552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:03.969917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:03.969927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:03.969935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:03.972586] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.235 [2024-04-18 21:19:03.980708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:03.981263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.981581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.981622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:03.981655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:03.981956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:03.982123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:03.982132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:03.982140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:03.984725] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.235 [2024-04-18 21:19:03.993569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:03.994056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.994400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:03.994438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:03.994472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:03.994754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:03.994931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:03.994940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:03.994954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:03.997643] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.235 [2024-04-18 21:19:04.006391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.235 [2024-04-18 21:19:04.006967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:04.007232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.235 [2024-04-18 21:19:04.007273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.235 [2024-04-18 21:19:04.007306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.235 [2024-04-18 21:19:04.007653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.235 [2024-04-18 21:19:04.007830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.235 [2024-04-18 21:19:04.007839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.235 [2024-04-18 21:19:04.007848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.235 [2024-04-18 21:19:04.010488] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.235 [2024-04-18 21:19:04.019219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.019788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.020131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.020170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.020204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.020545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.020736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.020746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.020755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.024610] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.236 [2024-04-18 21:19:04.032897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.033394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.033763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.033805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.033838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.034452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.034833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.034843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.034852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.037600] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.236 [2024-04-18 21:19:04.046012] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.046583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.046898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.046937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.046970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.047597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.047918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.047927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.047937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.050737] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.236 [2024-04-18 21:19:04.058965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.059530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.059813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.059853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.059886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.060144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.060312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.060322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.060330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.063014] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.236 [2024-04-18 21:19:04.071825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.072367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.072784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.072818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.072828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.073000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.073166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.073175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.073184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.075761] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.236 [2024-04-18 21:19:04.084742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.085309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.085716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.085729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.085738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.085911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.086077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.086087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.086095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.088681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.236 [2024-04-18 21:19:04.097613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.098139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.098497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.098515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.098527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.098710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.098886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.098896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.098905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.101552] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.236 [2024-04-18 21:19:04.110441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.110985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.111356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.111395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.111427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.111848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.112109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.112122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.112135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.116176] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.236 [2024-04-18 21:19:04.123785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.124328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.124686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.124700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.236 [2024-04-18 21:19:04.124709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.236 [2024-04-18 21:19:04.124887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.236 [2024-04-18 21:19:04.125058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.236 [2024-04-18 21:19:04.125067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.236 [2024-04-18 21:19:04.125076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.236 [2024-04-18 21:19:04.127734] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.236 [2024-04-18 21:19:04.136667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.236 [2024-04-18 21:19:04.137235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.236 [2024-04-18 21:19:04.137644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-04-18 21:19:04.137684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.237 [2024-04-18 21:19:04.137717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.237 [2024-04-18 21:19:04.137926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.237 [2024-04-18 21:19:04.138093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.237 [2024-04-18 21:19:04.138102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.237 [2024-04-18 21:19:04.138110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.237 [2024-04-18 21:19:04.140695] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.237 [2024-04-18 21:19:04.149512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.237 [2024-04-18 21:19:04.150062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-04-18 21:19:04.150459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-04-18 21:19:04.150498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.237 [2024-04-18 21:19:04.150549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.237 [2024-04-18 21:19:04.151123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.237 [2024-04-18 21:19:04.151299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.237 [2024-04-18 21:19:04.151308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.237 [2024-04-18 21:19:04.151317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.237 [2024-04-18 21:19:04.153938] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.237 [2024-04-18 21:19:04.162582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.237 [2024-04-18 21:19:04.163167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-04-18 21:19:04.163582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-04-18 21:19:04.163631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.237 [2024-04-18 21:19:04.163664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.497 [2024-04-18 21:19:04.164245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.497 [2024-04-18 21:19:04.164426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.497 [2024-04-18 21:19:04.164436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.497 [2024-04-18 21:19:04.164446] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.497 [2024-04-18 21:19:04.167220] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.497 [2024-04-18 21:19:04.175590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.497 [2024-04-18 21:19:04.176155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.497 [2024-04-18 21:19:04.176574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.497 [2024-04-18 21:19:04.176614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.497 [2024-04-18 21:19:04.176647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.497 [2024-04-18 21:19:04.177113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.177280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.177289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.177297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.179886] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.498 [2024-04-18 21:19:04.188497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.188999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.189414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.189453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.189485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.498 [2024-04-18 21:19:04.189954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.190121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.190130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.190139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.192819] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.498 [2024-04-18 21:19:04.201370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.201923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.202270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.202309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.202352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.498 [2024-04-18 21:19:04.202937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.203198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.203211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.203224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.207258] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.498 [2024-04-18 21:19:04.214884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.215434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.215862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.215911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.215921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.498 [2024-04-18 21:19:04.216104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.216279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.216288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.216297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.218979] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.498 [2024-04-18 21:19:04.227767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.228304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.228693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.228734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.228767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.498 [2024-04-18 21:19:04.229110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.229287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.229297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.229306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.231927] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.498 [2024-04-18 21:19:04.240579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.241116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.241506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.241561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.241594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.498 [2024-04-18 21:19:04.242106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.242281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.242291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.242300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.244924] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.498 [2024-04-18 21:19:04.253455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.253999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.254407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.254446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.254479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.498 [2024-04-18 21:19:04.255035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.255212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.255222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.255231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.257849] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.498 [2024-04-18 21:19:04.266384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.266918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.267309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.267348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.267381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.498 [2024-04-18 21:19:04.268007] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.268446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.268456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.268465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.271076] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.498 [2024-04-18 21:19:04.279223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.279765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.280116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.280155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.280188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.498 [2024-04-18 21:19:04.280456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.498 [2024-04-18 21:19:04.280652] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.498 [2024-04-18 21:19:04.280663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.498 [2024-04-18 21:19:04.280671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.498 [2024-04-18 21:19:04.283316] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.498 [2024-04-18 21:19:04.292249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.498 [2024-04-18 21:19:04.292765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.293131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.498 [2024-04-18 21:19:04.293171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.498 [2024-04-18 21:19:04.293204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.293619] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.293807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.293817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.293826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.297655] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.499 [2024-04-18 21:19:04.305784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.306337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.306727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.306768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.499 [2024-04-18 21:19:04.306801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.307413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.307588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.307598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.307607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.310259] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.499 [2024-04-18 21:19:04.318796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.319355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.319720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.319762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.499 [2024-04-18 21:19:04.319795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.320330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.320496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.320514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.320524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.323195] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.499 [2024-04-18 21:19:04.331763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.332340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.332722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.332736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.499 [2024-04-18 21:19:04.332746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.332920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.333087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.333096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.333105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.335785] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.499 [2024-04-18 21:19:04.344636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.345196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.345621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.345676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.499 [2024-04-18 21:19:04.345687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.345875] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.346042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.346051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.346060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.348733] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.499 [2024-04-18 21:19:04.357588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.358083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.358502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.358551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.499 [2024-04-18 21:19:04.358577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.358766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.358933] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.358942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.358964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.361544] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.499 [2024-04-18 21:19:04.370458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.371041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.371397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.371409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.499 [2024-04-18 21:19:04.371419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.371610] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.371787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.371797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.371806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.374532] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.499 [2024-04-18 21:19:04.383493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.383993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.384330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.384370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.499 [2024-04-18 21:19:04.384414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.384628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.384888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.384901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.384915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.388977] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.499 [2024-04-18 21:19:04.396903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.397465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.397894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.397935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.499 [2024-04-18 21:19:04.397969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.499 [2024-04-18 21:19:04.398590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.499 [2024-04-18 21:19:04.398839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.499 [2024-04-18 21:19:04.398848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.499 [2024-04-18 21:19:04.398857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.499 [2024-04-18 21:19:04.401558] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.499 [2024-04-18 21:19:04.409793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.499 [2024-04-18 21:19:04.410363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.499 [2024-04-18 21:19:04.410761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.500 [2024-04-18 21:19:04.410802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.500 [2024-04-18 21:19:04.410836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.500 [2024-04-18 21:19:04.411445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.500 [2024-04-18 21:19:04.411635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.500 [2024-04-18 21:19:04.411645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.500 [2024-04-18 21:19:04.411654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.500 [2024-04-18 21:19:04.414346] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.500 [2024-04-18 21:19:04.422663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.500 [2024-04-18 21:19:04.423211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.500 [2024-04-18 21:19:04.423585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.500 [2024-04-18 21:19:04.423625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.500 [2024-04-18 21:19:04.423659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.500 [2024-04-18 21:19:04.424271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.500 [2024-04-18 21:19:04.424735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.500 [2024-04-18 21:19:04.424745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.500 [2024-04-18 21:19:04.424754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.770 [2024-04-18 21:19:04.427582] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.770 [2024-04-18 21:19:04.435764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.770 [2024-04-18 21:19:04.436336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.436641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.436683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.770 [2024-04-18 21:19:04.436715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.770 [2024-04-18 21:19:04.437328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.770 [2024-04-18 21:19:04.437822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.770 [2024-04-18 21:19:04.437832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.770 [2024-04-18 21:19:04.437841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.770 [2024-04-18 21:19:04.440540] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.770 [2024-04-18 21:19:04.448623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.770 [2024-04-18 21:19:04.449175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.449587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.449627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.770 [2024-04-18 21:19:04.449660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.770 [2024-04-18 21:19:04.450270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.770 [2024-04-18 21:19:04.450649] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.770 [2024-04-18 21:19:04.450659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.770 [2024-04-18 21:19:04.450669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.770 [2024-04-18 21:19:04.453327] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.770 [2024-04-18 21:19:04.461587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.770 [2024-04-18 21:19:04.462119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.462429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.462468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.770 [2024-04-18 21:19:04.462501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.770 [2024-04-18 21:19:04.462898] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.770 [2024-04-18 21:19:04.463076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.770 [2024-04-18 21:19:04.463085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.770 [2024-04-18 21:19:04.463094] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.770 [2024-04-18 21:19:04.465754] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.770 [2024-04-18 21:19:04.474345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.770 [2024-04-18 21:19:04.474891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.475325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.475364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.770 [2024-04-18 21:19:04.475397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.770 [2024-04-18 21:19:04.475996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.770 [2024-04-18 21:19:04.476257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.770 [2024-04-18 21:19:04.476271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.770 [2024-04-18 21:19:04.476284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.770 [2024-04-18 21:19:04.480324] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3196926 Killed "${NVMF_APP[@]}" "$@" 00:25:48.770 21:19:04 -- host/bdevperf.sh@36 -- # tgt_init 00:25:48.770 21:19:04 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:48.770 21:19:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:48.770 21:19:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:48.770 21:19:04 -- common/autotest_common.sh@10 -- # set +x 00:25:48.770 [2024-04-18 21:19:04.487822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.770 [2024-04-18 21:19:04.488299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.488692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.488705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.770 [2024-04-18 21:19:04.488715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.770 [2024-04-18 21:19:04.488897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.770 [2024-04-18 21:19:04.489073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.770 [2024-04-18 21:19:04.489082] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.770 [2024-04-18 21:19:04.489091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.770 [2024-04-18 21:19:04.491910] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.770 21:19:04 -- nvmf/common.sh@470 -- # nvmfpid=3198332 00:25:48.770 21:19:04 -- nvmf/common.sh@471 -- # waitforlisten 3198332 00:25:48.770 21:19:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:48.770 21:19:04 -- common/autotest_common.sh@817 -- # '[' -z 3198332 ']' 00:25:48.770 21:19:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.770 21:19:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:48.770 21:19:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.770 21:19:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:48.770 21:19:04 -- common/autotest_common.sh@10 -- # set +x 00:25:48.770 [2024-04-18 21:19:04.500864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.770 [2024-04-18 21:19:04.501450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.501816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.501830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.770 [2024-04-18 21:19:04.501841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.770 [2024-04-18 21:19:04.502030] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.770 [2024-04-18 21:19:04.502213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.770 [2024-04-18 21:19:04.502223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.770 [2024-04-18 21:19:04.502232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.770 [2024-04-18 21:19:04.505060] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.770 [2024-04-18 21:19:04.514045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.770 [2024-04-18 21:19:04.514620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.514982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.770 [2024-04-18 21:19:04.514996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.770 [2024-04-18 21:19:04.515011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.770 [2024-04-18 21:19:04.515200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.770 [2024-04-18 21:19:04.515383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.770 [2024-04-18 21:19:04.515392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.770 [2024-04-18 21:19:04.515401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.770 [2024-04-18 21:19:04.518225] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.771 [2024-04-18 21:19:04.527174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.527748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.528112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.528126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.528136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.528325] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 [2024-04-18 21:19:04.528507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.771 [2024-04-18 21:19:04.528528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.771 [2024-04-18 21:19:04.528537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.771 [2024-04-18 21:19:04.531280] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.771 [2024-04-18 21:19:04.537575] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:25:48.771 [2024-04-18 21:19:04.537612] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.771 [2024-04-18 21:19:04.540198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.540777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.541067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.541080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.541091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.541280] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 [2024-04-18 21:19:04.541461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.771 [2024-04-18 21:19:04.541471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.771 [2024-04-18 21:19:04.541481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.771 [2024-04-18 21:19:04.544298] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.771 [2024-04-18 21:19:04.553533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.554034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.554405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.554419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.554430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.554626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 [2024-04-18 21:19:04.554819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.771 [2024-04-18 21:19:04.554829] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.771 [2024-04-18 21:19:04.554838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.771 [2024-04-18 21:19:04.557674] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.771 [2024-04-18 21:19:04.566540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.567086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.567375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.567388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.567398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.567585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.771 [2024-04-18 21:19:04.567762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.771 [2024-04-18 21:19:04.567772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.771 [2024-04-18 21:19:04.567781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.771 [2024-04-18 21:19:04.570559] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.771 [2024-04-18 21:19:04.579576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.580154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.580431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.580444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.580454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.580651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 [2024-04-18 21:19:04.580843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.771 [2024-04-18 21:19:04.580853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.771 [2024-04-18 21:19:04.580862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.771 [2024-04-18 21:19:04.583615] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.771 [2024-04-18 21:19:04.592610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.593199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.593487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.593503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.593519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.593703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 [2024-04-18 21:19:04.593879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.771 [2024-04-18 21:19:04.593890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.771 [2024-04-18 21:19:04.593899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.771 [2024-04-18 21:19:04.596637] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.771 [2024-04-18 21:19:04.601872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:48.771 [2024-04-18 21:19:04.605669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.606238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.606600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.606614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.606624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.606809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 [2024-04-18 21:19:04.606986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.771 [2024-04-18 21:19:04.606996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.771 [2024-04-18 21:19:04.607005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.771 [2024-04-18 21:19:04.609746] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.771 [2024-04-18 21:19:04.618606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.619180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.619474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.619487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.619497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.619685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 [2024-04-18 21:19:04.619862] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.771 [2024-04-18 21:19:04.619873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.771 [2024-04-18 21:19:04.619883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.771 [2024-04-18 21:19:04.622621] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.771 [2024-04-18 21:19:04.631637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.771 [2024-04-18 21:19:04.632188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.632417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.771 [2024-04-18 21:19:04.632434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.771 [2024-04-18 21:19:04.632444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.771 [2024-04-18 21:19:04.632634] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.771 [2024-04-18 21:19:04.632811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.772 [2024-04-18 21:19:04.632822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.772 [2024-04-18 21:19:04.632831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.772 [2024-04-18 21:19:04.635570] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.772 [2024-04-18 21:19:04.644587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.772 [2024-04-18 21:19:04.645189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.772 [2024-04-18 21:19:04.645557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.772 [2024-04-18 21:19:04.645571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.772 [2024-04-18 21:19:04.645583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.772 [2024-04-18 21:19:04.645771] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.772 [2024-04-18 21:19:04.645949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.772 [2024-04-18 21:19:04.645959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.772 [2024-04-18 21:19:04.645968] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.772 [2024-04-18 21:19:04.648791] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.772 [2024-04-18 21:19:04.657757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.772 [2024-04-18 21:19:04.658332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.772 [2024-04-18 21:19:04.658673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.772 [2024-04-18 21:19:04.658687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.772 [2024-04-18 21:19:04.658698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.772 [2024-04-18 21:19:04.658887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.772 [2024-04-18 21:19:04.659068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.772 [2024-04-18 21:19:04.659078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.772 [2024-04-18 21:19:04.659087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.772 [2024-04-18 21:19:04.662051] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.772 [2024-04-18 21:19:04.670858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.772 [2024-04-18 21:19:04.671438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.772 [2024-04-18 21:19:04.671711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.772 [2024-04-18 21:19:04.671726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.772 [2024-04-18 21:19:04.671743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.772 [2024-04-18 21:19:04.671933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.772 [2024-04-18 21:19:04.672115] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.772 [2024-04-18 21:19:04.672125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.772 [2024-04-18 21:19:04.672134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.772 [2024-04-18 21:19:04.674891] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.772 [2024-04-18 21:19:04.676472] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.772 [2024-04-18 21:19:04.676496] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.772 [2024-04-18 21:19:04.676503] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.772 [2024-04-18 21:19:04.676514] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.772 [2024-04-18 21:19:04.676520] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:48.772 [2024-04-18 21:19:04.676555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.772 [2024-04-18 21:19:04.676638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:48.772 [2024-04-18 21:19:04.676640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.772 [2024-04-18 21:19:04.683891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.772 [2024-04-18 21:19:04.684489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.772 [2024-04-18 21:19:04.684788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.772 [2024-04-18 21:19:04.684803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:48.772 [2024-04-18 21:19:04.684816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:48.772 [2024-04-18 21:19:04.685011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:48.772 [2024-04-18 21:19:04.685195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.772 [2024-04-18 21:19:04.685205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.772 [2024-04-18 21:19:04.685215] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.772 [2024-04-18 21:19:04.688065] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.076 [2024-04-18 21:19:04.696969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.076 [2024-04-18 21:19:04.697562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.076 [2024-04-18 21:19:04.697810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.076 [2024-04-18 21:19:04.697824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.076 [2024-04-18 21:19:04.697836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.076 [2024-04-18 21:19:04.698030] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.076 [2024-04-18 21:19:04.698213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.076 [2024-04-18 21:19:04.698224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.076 [2024-04-18 21:19:04.698241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.076 [2024-04-18 21:19:04.701073] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.076 [2024-04-18 21:19:04.710070] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.076 [2024-04-18 21:19:04.710647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.076 [2024-04-18 21:19:04.711016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.076 [2024-04-18 21:19:04.711029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.076 [2024-04-18 21:19:04.711042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.076 [2024-04-18 21:19:04.711233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.076 [2024-04-18 21:19:04.711418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.076 [2024-04-18 21:19:04.711428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.076 [2024-04-18 21:19:04.711439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.076 [2024-04-18 21:19:04.714264] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.076 [2024-04-18 21:19:04.723248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.076 [2024-04-18 21:19:04.723767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.076 [2024-04-18 21:19:04.724109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.076 [2024-04-18 21:19:04.724124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.076 [2024-04-18 21:19:04.724137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.076 [2024-04-18 21:19:04.724328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.076 [2024-04-18 21:19:04.724518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.076 [2024-04-18 21:19:04.724529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.076 [2024-04-18 21:19:04.724539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.076 [2024-04-18 21:19:04.727356] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.076 [2024-04-18 21:19:04.736357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.076 [2024-04-18 21:19:04.736816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.076 [2024-04-18 21:19:04.737067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.076 [2024-04-18 21:19:04.737080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.076 [2024-04-18 21:19:04.737092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.737284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.737468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.737478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.737488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.740319] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.077 [2024-04-18 21:19:04.749450] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.749959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.750310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.750324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.750334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.750529] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.750712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.750722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.750731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.753548] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.077 [2024-04-18 21:19:04.762506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.763017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.763397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.763410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.763421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.763617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.763799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.763809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.763819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.766646] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.077 [2024-04-18 21:19:04.775626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.776067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.776462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.776475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.776486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.776682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.776866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.776876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.776885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.779710] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.077 [2024-04-18 21:19:04.788684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.789188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.789418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.789431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.789442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.789640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.789824] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.789834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.789844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.792672] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.077 [2024-04-18 21:19:04.801803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.802300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.802618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.802632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.802642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.802832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.803015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.803025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.803035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.805859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.077 [2024-04-18 21:19:04.814832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.815365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.815708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.815721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.815732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.815920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.816102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.816112] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.816121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.818948] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.077 [2024-04-18 21:19:04.827904] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.828474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.828824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.828837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.828848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.829036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.829218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.829227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.829237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.832068] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.077 [2024-04-18 21:19:04.841032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.841580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.841873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.841886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.841896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.842084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.842266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.842276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.842285] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.077 [2024-04-18 21:19:04.845106] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.077 [2024-04-18 21:19:04.854083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.077 [2024-04-18 21:19:04.854675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.854966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-04-18 21:19:04.854979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.077 [2024-04-18 21:19:04.854989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.077 [2024-04-18 21:19:04.855179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.077 [2024-04-18 21:19:04.855362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.077 [2024-04-18 21:19:04.855372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.077 [2024-04-18 21:19:04.855381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.858208] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.078 [2024-04-18 21:19:04.867180] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.867786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.868128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.868144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.868155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.868342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.868531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.868542] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.868552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.871372] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.078 [2024-04-18 21:19:04.880346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.880841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.881072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.881085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.881095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.881283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.881465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.881475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.881484] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.884299] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.078 [2024-04-18 21:19:04.893428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.893940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.894331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.894344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.894355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.894549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.894731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.894741] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.894750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.897567] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.078 [2024-04-18 21:19:04.906526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.907123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.907479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.907491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.907505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.907717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.907900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.907909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.907919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.910739] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.078 [2024-04-18 21:19:04.919655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.920150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.920517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.920530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.920541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.920728] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.920910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.920920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.920929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.923750] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.078 [2024-04-18 21:19:04.932726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.933170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.933507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.933525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.933535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.933722] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.933903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.933913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.933923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.936742] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.078 [2024-04-18 21:19:04.945880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.946374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.946785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.946799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.946810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.947002] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.947185] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.947195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.947204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.950059] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.078 [2024-04-18 21:19:04.959017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.959501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.959824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.959837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.959847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.960035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.960218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.960228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.960237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.963061] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.078 [2024-04-18 21:19:04.972192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.972760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.973104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-04-18 21:19:04.973117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.078 [2024-04-18 21:19:04.973128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.078 [2024-04-18 21:19:04.973315] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.078 [2024-04-18 21:19:04.973497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.078 [2024-04-18 21:19:04.973507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.078 [2024-04-18 21:19:04.973520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.078 [2024-04-18 21:19:04.976339] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.078 [2024-04-18 21:19:04.985295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.078 [2024-04-18 21:19:04.985802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-04-18 21:19:04.986026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-04-18 21:19:04.986039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.079 [2024-04-18 21:19:04.986050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.079 [2024-04-18 21:19:04.986238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.079 [2024-04-18 21:19:04.986423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.079 [2024-04-18 21:19:04.986432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.079 [2024-04-18 21:19:04.986441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.079 [2024-04-18 21:19:04.989258] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.079 [2024-04-18 21:19:04.998378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.079 [2024-04-18 21:19:04.998858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-04-18 21:19:04.999100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-04-18 21:19:04.999113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.079 [2024-04-18 21:19:04.999124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.079 [2024-04-18 21:19:04.999313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.079 [2024-04-18 21:19:04.999494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.079 [2024-04-18 21:19:04.999504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.079 [2024-04-18 21:19:04.999520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.079 [2024-04-18 21:19:05.002331] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.341 [2024-04-18 21:19:05.011459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.341 [2024-04-18 21:19:05.011898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.012210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.012223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.341 [2024-04-18 21:19:05.012233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.341 [2024-04-18 21:19:05.012423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.341 [2024-04-18 21:19:05.012613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.341 [2024-04-18 21:19:05.012623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.341 [2024-04-18 21:19:05.012633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.341 [2024-04-18 21:19:05.015447] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.341 [2024-04-18 21:19:05.024566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.341 [2024-04-18 21:19:05.024997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.025367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.025380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.341 [2024-04-18 21:19:05.025390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.341 [2024-04-18 21:19:05.025583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.341 [2024-04-18 21:19:05.025765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.341 [2024-04-18 21:19:05.025779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.341 [2024-04-18 21:19:05.025788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.341 [2024-04-18 21:19:05.028605] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.341 [2024-04-18 21:19:05.037752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.341 [2024-04-18 21:19:05.038233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.038466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.038480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.341 [2024-04-18 21:19:05.038490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.341 [2024-04-18 21:19:05.038686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.341 [2024-04-18 21:19:05.038869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.341 [2024-04-18 21:19:05.038880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.341 [2024-04-18 21:19:05.038889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.341 [2024-04-18 21:19:05.041713] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.341 [2024-04-18 21:19:05.050847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.341 [2024-04-18 21:19:05.051433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.051734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.051748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.341 [2024-04-18 21:19:05.051758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.341 [2024-04-18 21:19:05.051947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.341 [2024-04-18 21:19:05.052128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.341 [2024-04-18 21:19:05.052138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.341 [2024-04-18 21:19:05.052147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.341 [2024-04-18 21:19:05.054965] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.341 [2024-04-18 21:19:05.063926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.341 [2024-04-18 21:19:05.064486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.064802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.064815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.341 [2024-04-18 21:19:05.064825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.341 [2024-04-18 21:19:05.065013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.341 [2024-04-18 21:19:05.065194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.341 [2024-04-18 21:19:05.065204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.341 [2024-04-18 21:19:05.065221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.341 [2024-04-18 21:19:05.068045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.341 [2024-04-18 21:19:05.077257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.341 [2024-04-18 21:19:05.077741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.078077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.341 [2024-04-18 21:19:05.078091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.341 [2024-04-18 21:19:05.078102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.341 [2024-04-18 21:19:05.078290] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.078472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.078482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.078491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.081306] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.342 [2024-04-18 21:19:05.090423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.090972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.091330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.091344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.091354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.091547] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.091729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.091739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.091748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.094565] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.342 [2024-04-18 21:19:05.103521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.104088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.104448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.104460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.104471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.104664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.104846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.104856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.104865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.107687] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.342 [2024-04-18 21:19:05.116642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.117214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.117588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.117601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.117612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.117800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.117981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.117991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.118000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.120815] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.342 [2024-04-18 21:19:05.129764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.130317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.130689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.130703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.130713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.130901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.131083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.131093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.131102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.133914] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.342 [2024-04-18 21:19:05.142867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.143419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.143797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.143811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.143821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.144009] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.144191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.144201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.144210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.147032] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.342 [2024-04-18 21:19:05.155978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.156470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.156829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.156844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.156854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.157045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.157227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.157237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.157246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.160065] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.342 [2024-04-18 21:19:05.169028] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.169575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.169911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.169924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.169935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.170124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.170310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.170320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.170329] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.173144] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.342 [2024-04-18 21:19:05.182097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.182589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.182953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.182967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.182977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.183166] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.342 [2024-04-18 21:19:05.183349] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.342 [2024-04-18 21:19:05.183358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.342 [2024-04-18 21:19:05.183368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.342 [2024-04-18 21:19:05.186194] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.342 [2024-04-18 21:19:05.195154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.342 [2024-04-18 21:19:05.195730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.196103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.342 [2024-04-18 21:19:05.196117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.342 [2024-04-18 21:19:05.196128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.342 [2024-04-18 21:19:05.196316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.343 [2024-04-18 21:19:05.196497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.343 [2024-04-18 21:19:05.196507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.343 [2024-04-18 21:19:05.196522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.343 [2024-04-18 21:19:05.199340] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.343 [2024-04-18 21:19:05.208309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.343 [2024-04-18 21:19:05.208810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.209170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.209183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.343 [2024-04-18 21:19:05.209194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.343 [2024-04-18 21:19:05.209382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.343 [2024-04-18 21:19:05.209569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.343 [2024-04-18 21:19:05.209579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.343 [2024-04-18 21:19:05.209588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.343 [2024-04-18 21:19:05.212397] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.343 [2024-04-18 21:19:05.221369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.343 [2024-04-18 21:19:05.221950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.222290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.222304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.343 [2024-04-18 21:19:05.222314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.343 [2024-04-18 21:19:05.222504] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.343 [2024-04-18 21:19:05.222692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.343 [2024-04-18 21:19:05.222702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.343 [2024-04-18 21:19:05.222711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.343 [2024-04-18 21:19:05.225528] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.343 [2024-04-18 21:19:05.234482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.343 [2024-04-18 21:19:05.234863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.235144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.235160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.343 [2024-04-18 21:19:05.235171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.343 [2024-04-18 21:19:05.235360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.343 [2024-04-18 21:19:05.235549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.343 [2024-04-18 21:19:05.235560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.343 [2024-04-18 21:19:05.235569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.343 [2024-04-18 21:19:05.238383] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.343 [2024-04-18 21:19:05.247500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.343 [2024-04-18 21:19:05.248077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.248207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.248220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.343 [2024-04-18 21:19:05.248230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.343 [2024-04-18 21:19:05.248417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.343 [2024-04-18 21:19:05.248606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.343 [2024-04-18 21:19:05.248616] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.343 [2024-04-18 21:19:05.248625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.343 [2024-04-18 21:19:05.251446] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.343 [2024-04-18 21:19:05.260565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.343 [2024-04-18 21:19:05.261137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.261380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.343 [2024-04-18 21:19:05.261393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.343 [2024-04-18 21:19:05.261403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.343 [2024-04-18 21:19:05.261596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.343 [2024-04-18 21:19:05.261778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.343 [2024-04-18 21:19:05.261789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.343 [2024-04-18 21:19:05.261798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.343 [2024-04-18 21:19:05.264613] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.603 [2024-04-18 21:19:05.273727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.603 [2024-04-18 21:19:05.274227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.274590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.274604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.603 [2024-04-18 21:19:05.274620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.603 [2024-04-18 21:19:05.274808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.603 [2024-04-18 21:19:05.274991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.603 [2024-04-18 21:19:05.275001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.603 [2024-04-18 21:19:05.275011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.603 [2024-04-18 21:19:05.277826] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.603 [2024-04-18 21:19:05.286785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.603 [2024-04-18 21:19:05.287355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.287717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.287732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.603 [2024-04-18 21:19:05.287742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.603 [2024-04-18 21:19:05.287932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.603 [2024-04-18 21:19:05.288114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.603 [2024-04-18 21:19:05.288124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.603 [2024-04-18 21:19:05.288133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.603 [2024-04-18 21:19:05.290950] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.603 [2024-04-18 21:19:05.299915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.603 [2024-04-18 21:19:05.300419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.300706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.300721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.603 [2024-04-18 21:19:05.300731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.603 [2024-04-18 21:19:05.300921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.603 [2024-04-18 21:19:05.301103] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.603 [2024-04-18 21:19:05.301113] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.603 [2024-04-18 21:19:05.301124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.603 [2024-04-18 21:19:05.303940] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.603 [2024-04-18 21:19:05.313070] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.603 [2024-04-18 21:19:05.313637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.313980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.313993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.603 [2024-04-18 21:19:05.314003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.603 [2024-04-18 21:19:05.314195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.603 [2024-04-18 21:19:05.314377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.603 [2024-04-18 21:19:05.314386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.603 [2024-04-18 21:19:05.314396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.603 [2024-04-18 21:19:05.317211] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.603 [2024-04-18 21:19:05.326154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.603 [2024-04-18 21:19:05.326623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.326986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.326999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.603 [2024-04-18 21:19:05.327009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.603 [2024-04-18 21:19:05.327197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.603 [2024-04-18 21:19:05.327378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.603 [2024-04-18 21:19:05.327387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.603 [2024-04-18 21:19:05.327396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.603 [2024-04-18 21:19:05.330229] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.603 [2024-04-18 21:19:05.339177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.603 [2024-04-18 21:19:05.339621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.339986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.603 [2024-04-18 21:19:05.339999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.603 [2024-04-18 21:19:05.340010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.604 [2024-04-18 21:19:05.340197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.604 [2024-04-18 21:19:05.340379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.604 [2024-04-18 21:19:05.340389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.604 [2024-04-18 21:19:05.340398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.604 [2024-04-18 21:19:05.343213] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.604 21:19:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:49.604 21:19:05 -- common/autotest_common.sh@850 -- # return 0 00:25:49.604 21:19:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:49.604 21:19:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:49.604 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 [2024-04-18 21:19:05.352339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.604 [2024-04-18 21:19:05.352841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.353227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.353240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.604 [2024-04-18 21:19:05.353254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.604 [2024-04-18 21:19:05.353442] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.604 [2024-04-18 21:19:05.353630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.604 [2024-04-18 21:19:05.353641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.604 [2024-04-18 21:19:05.353650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.604 [2024-04-18 21:19:05.356464] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
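The shell trace interleaved above ("(( i == 0 ))", "return 0", "timing_exit start_nvmf_tgt") marks the point where the harness's wait-for-target loop succeeds: the retry counter never reached zero, the helper returns 0, and the timed start_nvmf_tgt phase is closed. From here the script starts configuring the target over RPC while the host-side reconnect attempts keep failing in the background. A rough sketch of that wait pattern, assuming scripts/rpc.py and the default RPC socket (the real logic lives in the autotest helpers and is not reproduced here):

  rpc_py=./scripts/rpc.py
  i=30
  # Keep polling the target's RPC server until it answers a trivial request.
  until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      (( --i == 0 )) && { echo "nvmf_tgt never became ready" >&2; exit 1; }
      sleep 1
  done
  # (( i == 0 )) is false on success, so the caller returns 0 and timing_exit runs.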
00:25:49.604 [2024-04-18 21:19:05.365420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.604 [2024-04-18 21:19:05.365946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.366312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.366325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.604 [2024-04-18 21:19:05.366336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.604 [2024-04-18 21:19:05.366530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.604 [2024-04-18 21:19:05.366714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.604 [2024-04-18 21:19:05.366724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.604 [2024-04-18 21:19:05.366733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.604 [2024-04-18 21:19:05.369551] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.604 [2024-04-18 21:19:05.378514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.604 [2024-04-18 21:19:05.378918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.379225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.379237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.604 [2024-04-18 21:19:05.379248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.604 [2024-04-18 21:19:05.379435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.604 [2024-04-18 21:19:05.379622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.604 [2024-04-18 21:19:05.379634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.604 [2024-04-18 21:19:05.379643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.604 21:19:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.604 21:19:05 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.604 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 [2024-04-18 21:19:05.382459] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
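Mixed into the failure cycles above, the script installs its cleanup trap (process_shm / nvmftestfini) and issues the first configuration RPC, rpc_cmd nvmf_create_transport -t tcp -o -u 8192, which tells the running nvmf_tgt to set up the TCP transport. rpc_cmd is effectively a wrapper around scripts/rpc.py, so the standalone equivalent would look roughly like this (socket path assumed; -t selects the TCP transport, -u 8192 sets the I/O unit size, and -o is passed through exactly as the test uses it):

  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192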
00:25:49.604 [2024-04-18 21:19:05.386135] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.604 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.604 21:19:05 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:49.604 [2024-04-18 21:19:05.391602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.604 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 [2024-04-18 21:19:05.392080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.392326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.392338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.604 [2024-04-18 21:19:05.392348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.604 [2024-04-18 21:19:05.392541] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.604 [2024-04-18 21:19:05.392723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.604 [2024-04-18 21:19:05.392733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.604 [2024-04-18 21:19:05.392742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.604 [2024-04-18 21:19:05.395561] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.604 [2024-04-18 21:19:05.404678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.604 [2024-04-18 21:19:05.405175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.405534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.405548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.604 [2024-04-18 21:19:05.405558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.604 [2024-04-18 21:19:05.405745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.604 [2024-04-18 21:19:05.405927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.604 [2024-04-18 21:19:05.405936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.604 [2024-04-18 21:19:05.405945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.604 [2024-04-18 21:19:05.408767] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.604 [2024-04-18 21:19:05.417733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.604 [2024-04-18 21:19:05.418308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.418624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.418638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.604 [2024-04-18 21:19:05.418648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.604 [2024-04-18 21:19:05.418839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.604 [2024-04-18 21:19:05.419021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.604 [2024-04-18 21:19:05.419031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.604 [2024-04-18 21:19:05.419040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.604 [2024-04-18 21:19:05.421860] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.604 Malloc0 00:25:49.604 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.604 21:19:05 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.604 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 [2024-04-18 21:19:05.430834] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.604 [2024-04-18 21:19:05.431409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.431723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.604 [2024-04-18 21:19:05.431736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.604 [2024-04-18 21:19:05.431747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.604 [2024-04-18 21:19:05.431936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.604 [2024-04-18 21:19:05.432118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.604 [2024-04-18 21:19:05.432128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.605 [2024-04-18 21:19:05.432138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.605 [2024-04-18 21:19:05.434953] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.605 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.605 21:19:05 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.605 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.605 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.605 [2024-04-18 21:19:05.443915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.605 [2024-04-18 21:19:05.444415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.605 [2024-04-18 21:19:05.444750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.605 [2024-04-18 21:19:05.444764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cf900 with addr=10.0.0.2, port=4420 00:25:49.605 [2024-04-18 21:19:05.444775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf900 is same with the state(5) to be set 00:25:49.605 [2024-04-18 21:19:05.444964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:25:49.605 [2024-04-18 21:19:05.445146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.605 [2024-04-18 21:19:05.445156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.605 [2024-04-18 21:19:05.445165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.605 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.605 21:19:05 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.605 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.605 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.605 [2024-04-18 21:19:05.447989] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.605 [2024-04-18 21:19:05.450523] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.605 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.605 21:19:05 -- host/bdevperf.sh@38 -- # wait 3197403 00:25:49.605 [2024-04-18 21:19:05.456956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.863 [2024-04-18 21:19:05.571861] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
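For reference, the target-side bring-up that the bdevperf host run depends on is scattered through the interleaved output above; collected in one place it is just the following RPC sequence (rpc_cmd is the test suite's rpc.py wrapper, and every argument below is taken verbatim from the trace):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                   # *** TCP Transport Init ***
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB malloc bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is announced ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the host's controller reset stops failing and bdevperf proceeds to the I/O phase whose latency summary follows.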
00:25:59.839 00:25:59.839 Latency(us) 00:25:59.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.839 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.839 Verification LBA range: start 0x0 length 0x4000 00:25:59.839 Nvme1n1 : 15.01 8144.15 31.81 12187.11 0.00 6275.61 865.50 19603.81 00:25:59.839 =================================================================================================================== 00:25:59.839 Total : 8144.15 31.81 12187.11 0.00 6275.61 865.50 19603.81 00:25:59.839 21:19:14 -- host/bdevperf.sh@39 -- # sync 00:25:59.839 21:19:14 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.839 21:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.839 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:25:59.839 21:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.839 21:19:14 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:59.839 21:19:14 -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:59.839 21:19:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:59.839 21:19:14 -- nvmf/common.sh@117 -- # sync 00:25:59.839 21:19:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:59.839 21:19:14 -- nvmf/common.sh@120 -- # set +e 00:25:59.839 21:19:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:59.839 21:19:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:59.839 rmmod nvme_tcp 00:25:59.839 rmmod nvme_fabrics 00:25:59.839 rmmod nvme_keyring 00:25:59.839 21:19:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:59.839 21:19:14 -- nvmf/common.sh@124 -- # set -e 00:25:59.839 21:19:14 -- nvmf/common.sh@125 -- # return 0 00:25:59.839 21:19:14 -- nvmf/common.sh@478 -- # '[' -n 3198332 ']' 00:25:59.839 21:19:14 -- nvmf/common.sh@479 -- # killprocess 3198332 00:25:59.839 21:19:14 -- common/autotest_common.sh@936 -- # '[' -z 3198332 ']' 00:25:59.839 21:19:14 -- common/autotest_common.sh@940 -- # kill -0 3198332 00:25:59.839 21:19:14 -- common/autotest_common.sh@941 -- # uname 00:25:59.839 21:19:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:59.839 21:19:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3198332 00:25:59.839 21:19:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:59.839 21:19:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:59.839 21:19:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3198332' 00:25:59.839 killing process with pid 3198332 00:25:59.839 21:19:14 -- common/autotest_common.sh@955 -- # kill 3198332 00:25:59.839 21:19:14 -- common/autotest_common.sh@960 -- # wait 3198332 00:25:59.839 21:19:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:59.839 21:19:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:59.839 21:19:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:59.839 21:19:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.839 21:19:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.839 21:19:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.839 21:19:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.839 21:19:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.776 21:19:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:00.776 00:26:00.776 real 0m26.855s 00:26:00.776 user 1m3.328s 00:26:00.776 sys 0m6.740s 00:26:00.776 21:19:16 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:26:00.776 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:26:00.776 ************************************ 00:26:00.776 END TEST nvmf_bdevperf 00:26:00.776 ************************************ 00:26:01.035 21:19:16 -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:01.035 21:19:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:01.035 21:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:01.035 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:26:01.035 ************************************ 00:26:01.035 START TEST nvmf_target_disconnect 00:26:01.035 ************************************ 00:26:01.035 21:19:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:01.035 * Looking for test storage... 00:26:01.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.035 21:19:16 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.035 21:19:16 -- nvmf/common.sh@7 -- # uname -s 00:26:01.035 21:19:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.035 21:19:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.035 21:19:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.035 21:19:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.035 21:19:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.035 21:19:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.035 21:19:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.035 21:19:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.035 21:19:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.035 21:19:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.035 21:19:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.035 21:19:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.035 21:19:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.035 21:19:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.035 21:19:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.035 21:19:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.035 21:19:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.035 21:19:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.035 21:19:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.035 21:19:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.035 21:19:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.035 21:19:16 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.036 21:19:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.036 21:19:16 -- paths/export.sh@5 -- # export PATH 00:26:01.036 21:19:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.036 21:19:16 -- nvmf/common.sh@47 -- # : 0 00:26:01.036 21:19:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:01.036 21:19:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:01.036 21:19:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.036 21:19:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.036 21:19:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.036 21:19:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:01.036 21:19:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:01.036 21:19:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:01.036 21:19:16 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:01.036 21:19:16 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:01.036 21:19:16 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:01.036 21:19:16 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:01.036 21:19:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:01.036 21:19:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.036 21:19:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:01.036 21:19:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:01.036 21:19:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:01.036 21:19:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.036 21:19:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:01.036 21:19:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.036 21:19:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:01.036 21:19:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:01.036 21:19:16 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:26:01.036 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:26:07.608 21:19:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:07.608 21:19:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:07.608 21:19:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:07.608 21:19:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:07.608 21:19:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:07.608 21:19:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:07.608 21:19:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:07.608 21:19:22 -- nvmf/common.sh@295 -- # net_devs=() 00:26:07.608 21:19:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:07.608 21:19:22 -- nvmf/common.sh@296 -- # e810=() 00:26:07.608 21:19:22 -- nvmf/common.sh@296 -- # local -ga e810 00:26:07.608 21:19:22 -- nvmf/common.sh@297 -- # x722=() 00:26:07.608 21:19:22 -- nvmf/common.sh@297 -- # local -ga x722 00:26:07.608 21:19:22 -- nvmf/common.sh@298 -- # mlx=() 00:26:07.608 21:19:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:07.608 21:19:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.608 21:19:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:07.608 21:19:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:07.608 21:19:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:07.608 21:19:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.608 21:19:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:07.608 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:07.608 21:19:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.608 21:19:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:07.608 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:07.608 21:19:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.608 21:19:22 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:07.608 21:19:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.608 21:19:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.608 21:19:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:07.608 21:19:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.608 21:19:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:07.608 Found net devices under 0000:86:00.0: cvl_0_0 00:26:07.608 21:19:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.608 21:19:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.608 21:19:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.608 21:19:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:07.608 21:19:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.608 21:19:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:07.608 Found net devices under 0000:86:00.1: cvl_0_1 00:26:07.608 21:19:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.608 21:19:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:07.608 21:19:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:07.608 21:19:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:07.608 21:19:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.608 21:19:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.608 21:19:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.608 21:19:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:07.608 21:19:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.608 21:19:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.608 21:19:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:07.608 21:19:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.608 21:19:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.608 21:19:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:07.608 21:19:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:07.608 21:19:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.608 21:19:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.608 21:19:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.608 21:19:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.608 21:19:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:07.608 21:19:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.608 21:19:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.608 21:19:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.608 21:19:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:07.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:07.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:26:07.608 00:26:07.608 --- 10.0.0.2 ping statistics --- 00:26:07.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.608 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:26:07.608 21:19:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:26:07.608 00:26:07.608 --- 10.0.0.1 ping statistics --- 00:26:07.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.608 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:26:07.608 21:19:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.608 21:19:22 -- nvmf/common.sh@411 -- # return 0 00:26:07.608 21:19:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:07.608 21:19:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.608 21:19:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:07.608 21:19:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.608 21:19:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:07.608 21:19:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:07.608 21:19:22 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:07.608 21:19:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:07.608 21:19:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:07.608 21:19:22 -- common/autotest_common.sh@10 -- # set +x 00:26:07.608 ************************************ 00:26:07.608 START TEST nvmf_target_disconnect_tc1 00:26:07.608 ************************************ 00:26:07.608 21:19:23 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:26:07.608 21:19:23 -- host/target_disconnect.sh@32 -- # set +e 00:26:07.608 21:19:23 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:07.608 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.608 [2024-04-18 21:19:23.200743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.609 [2024-04-18 21:19:23.201169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.609 [2024-04-18 21:19:23.201211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1261b60 with addr=10.0.0.2, port=4420 00:26:07.609 [2024-04-18 21:19:23.201247] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:07.609 [2024-04-18 21:19:23.201268] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:07.609 [2024-04-18 21:19:23.201281] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:07.609 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:07.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:07.609 Initializing NVMe Controllers 00:26:07.609 21:19:23 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:07.609 21:19:23 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:07.609 21:19:23 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:26:07.609 21:19:23 -- common/autotest_common.sh@1139 -- # return 0 00:26:07.609 
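The tc1 case above is a negative test: the initiator runs in the root namespace while the target namespace (cvl_0_0_ns_spdk) has no NVMe/TCP listener yet, so spdk_nvme_probe() against 10.0.0.2:4420 is expected to fail and the script treats that failure as a pass. Reconstructed from the host/target_disconnect.sh trace entries (@32-@41), the flow is roughly the sketch below; the exact pass/fail bookkeeping is simplified here:

  set +e                                           # @32: the probe failure is the expected outcome
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'   # @33: fails with 'Create probe context failed'
  # @37: the script then checks its status flag ('[' 1 '!=' 1 ']' in the trace) and, since the
  # probe failed as intended, re-enables errexit and the test case is reported as passed
  set -e                                           # @41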
21:19:23 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:07.609 21:19:23 -- host/target_disconnect.sh@41 -- # set -e 00:26:07.609 00:26:07.609 real 0m0.096s 00:26:07.609 user 0m0.035s 00:26:07.609 sys 0m0.060s 00:26:07.609 21:19:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:07.609 21:19:23 -- common/autotest_common.sh@10 -- # set +x 00:26:07.609 ************************************ 00:26:07.609 END TEST nvmf_target_disconnect_tc1 00:26:07.609 ************************************ 00:26:07.609 21:19:23 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:07.609 21:19:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:07.609 21:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:07.609 21:19:23 -- common/autotest_common.sh@10 -- # set +x 00:26:07.609 ************************************ 00:26:07.609 START TEST nvmf_target_disconnect_tc2 00:26:07.609 ************************************ 00:26:07.609 21:19:23 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:26:07.609 21:19:23 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:07.609 21:19:23 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:07.609 21:19:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:07.609 21:19:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:07.609 21:19:23 -- common/autotest_common.sh@10 -- # set +x 00:26:07.609 21:19:23 -- nvmf/common.sh@470 -- # nvmfpid=3203927 00:26:07.609 21:19:23 -- nvmf/common.sh@471 -- # waitforlisten 3203927 00:26:07.609 21:19:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:07.609 21:19:23 -- common/autotest_common.sh@817 -- # '[' -z 3203927 ']' 00:26:07.609 21:19:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.609 21:19:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:07.609 21:19:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.609 21:19:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:07.609 21:19:23 -- common/autotest_common.sh@10 -- # set +x 00:26:07.609 [2024-04-18 21:19:23.425483] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:07.609 [2024-04-18 21:19:23.425527] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.609 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.609 [2024-04-18 21:19:23.501670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:07.868 [2024-04-18 21:19:23.577548] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.868 [2024-04-18 21:19:23.577582] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.868 [2024-04-18 21:19:23.577589] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.868 [2024-04-18 21:19:23.577595] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:07.868 [2024-04-18 21:19:23.577600] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.868 [2024-04-18 21:19:23.577713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:07.868 [2024-04-18 21:19:23.577821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:07.868 [2024-04-18 21:19:23.577928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:07.868 [2024-04-18 21:19:23.577929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:08.436 21:19:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:08.436 21:19:24 -- common/autotest_common.sh@850 -- # return 0 00:26:08.436 21:19:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:08.436 21:19:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:08.436 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.436 21:19:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.436 21:19:24 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:08.436 21:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.436 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.436 Malloc0 00:26:08.436 21:19:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.436 21:19:24 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:08.436 21:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.436 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.436 [2024-04-18 21:19:24.298419] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.436 21:19:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.436 21:19:24 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:08.436 21:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.436 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.436 21:19:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.436 21:19:24 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:08.436 21:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.436 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.436 21:19:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.436 21:19:24 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.436 21:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.436 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.436 [2024-04-18 21:19:24.326695] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.436 21:19:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.436 21:19:24 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:08.436 21:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.436 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.436 21:19:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.436 21:19:24 -- host/target_disconnect.sh@50 -- # reconnectpid=3204047 00:26:08.436 21:19:24 -- host/target_disconnect.sh@52 -- # sleep 2 00:26:08.436 21:19:24 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.695 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.606 21:19:26 -- host/target_disconnect.sh@53 -- # kill -9 3203927 00:26:10.606 21:19:26 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Write completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 [2024-04-18 21:19:26.354262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 
starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.606 Read completed with error (sct=0, sc=8) 00:26:10.606 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 [2024-04-18 21:19:26.354505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 
00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 [2024-04-18 21:19:26.354697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 
Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Write completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 Read completed with error (sct=0, sc=8) 00:26:10.607 starting I/O failed 00:26:10.607 [2024-04-18 21:19:26.354887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.607 [2024-04-18 21:19:26.355278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.355632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.355666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.607 qpair failed and we were unable to recover it. 00:26:10.607 [2024-04-18 21:19:26.355998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.356385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.356415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.607 qpair failed and we were unable to recover it. 00:26:10.607 [2024-04-18 21:19:26.356825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.357148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.357178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.607 qpair failed and we were unable to recover it. 00:26:10.607 [2024-04-18 21:19:26.357620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.357880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.357910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.607 qpair failed and we were unable to recover it. 
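The long runs of "Read/Write completed with error (sct=0, sc=8)", the "CQ transport error -6 (No such device or address)" messages and the repeated "qpair failed and we were unable to recover it" entries are the intended outcome of tc2: the target (nvmfpid 3203927) is killed with SIGKILL while the reconnect example is driving I/O, so every outstanding command completes in error and each subsequent reconnect attempt gets ECONNREFUSED. Pieced together from the host/target_disconnect.sh trace entries, the driver sequence is approximately the sketch below; the backgrounding and $! capture are inferred from the reconnectpid assignment shown in the trace:

  disconnect_init 10.0.0.2                         # @45: start nvmf_tgt in the target netns, create cnode1 + listener
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &   # @48: host-side I/O in the background
  reconnectpid=$!                                  # @50 (3204047 in this run)
  sleep 2                                          # @52
  kill -9 "$nvmfpid"                               # @53: hard-kill the target (3203927) mid-I/O
  sleep 2                                          # @55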
00:26:10.607 [2024-04-18 21:19:26.358182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.358469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.358499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.607 qpair failed and we were unable to recover it. 00:26:10.607 [2024-04-18 21:19:26.358962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.359355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.359368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.607 qpair failed and we were unable to recover it. 00:26:10.607 [2024-04-18 21:19:26.359729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.360033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.360063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.607 qpair failed and we were unable to recover it. 00:26:10.607 [2024-04-18 21:19:26.360488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.360769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.360799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.607 qpair failed and we were unable to recover it. 00:26:10.607 [2024-04-18 21:19:26.361062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.607 [2024-04-18 21:19:26.361470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.361500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.361858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.362200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.362238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.362662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.362982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.363011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 
00:26:10.608 [2024-04-18 21:19:26.363246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.363502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.363539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.363887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.364324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.364354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.364674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.364965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.364994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.365356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.365675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.365705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.366047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.366456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.366471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.366745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.367016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.367029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.367249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.367631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.367645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 
00:26:10.608 [2024-04-18 21:19:26.368007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.368305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.368319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.368748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.369140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.369175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.369576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.369946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.369975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.370240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.370545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.370559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.370877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.371198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.371227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.371614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.371877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.371906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.372162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.372628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.372658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 
00:26:10.608 [2024-04-18 21:19:26.372927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.373240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.373253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.373633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.373873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.373902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.374170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.374477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.374506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.374808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.375087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.375117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.375494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.375781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.375795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.376018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.376332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.376345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.376635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.376908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.376936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 
00:26:10.608 [2024-04-18 21:19:26.377257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.377596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.377610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.377901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.378356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.378385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.378770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.379092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.379121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.379525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.379825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.379854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.608 qpair failed and we were unable to recover it. 00:26:10.608 [2024-04-18 21:19:26.380166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.380494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.608 [2024-04-18 21:19:26.380530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.380806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.381066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.381108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.381410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.381847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.381877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 
00:26:10.609 [2024-04-18 21:19:26.382256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.382596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.382627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.382898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.383226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.383256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.383582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.383835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.383864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.384191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.384504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.384543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.384866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.385117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.385146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.385584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.385903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.385932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.386241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.386599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.386614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 
00:26:10.609 [2024-04-18 21:19:26.386939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.387212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.387241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.387554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.387944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.387975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.388244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.388491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.388505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.388772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.389072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.389101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.389415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.389727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.389742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.390089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.390443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.390472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.390756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.391064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.391094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 
00:26:10.609 [2024-04-18 21:19:26.391524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.391860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.391889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.392194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.392608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.392639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.393053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.393370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.393405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.393698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.394015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.394045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.394358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.394678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.394708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.395044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.395479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.395509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.395889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.396284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.396313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 
00:26:10.609 [2024-04-18 21:19:26.396581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.396845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.396874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.397142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.397467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.397497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.397788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.398099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.398128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.398429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.398731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.398761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.399129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.399453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.399483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.609 qpair failed and we were unable to recover it. 00:26:10.609 [2024-04-18 21:19:26.399796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.400115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.609 [2024-04-18 21:19:26.400145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.400541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.400816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.400846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 
00:26:10.610 [2024-04-18 21:19:26.401122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.401489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.401527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.401792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.402116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.402145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.402486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.402818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.402832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.403187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.403623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.403654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.404086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.404491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.404531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.404787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.405074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.405088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.405526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.405876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.405907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 
00:26:10.610 [2024-04-18 21:19:26.406232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.406547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.406577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.406860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.407183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.407213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.407473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.407802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.407833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.408171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.408507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.408544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.408855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.409172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.409202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.409610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.409832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.409846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.410089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.410481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.410531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 
00:26:10.610 [2024-04-18 21:19:26.410858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.411197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.411227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.411631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.412000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.412030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.412396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.412793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.412824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.413146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.413555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.413585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.413938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.414180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.414209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.414540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.414940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.414970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.415387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.415773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.415803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 
00:26:10.610 [2024-04-18 21:19:26.416130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.416377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.416407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.416726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.417097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.417127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.417445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.417740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.417754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.418068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.418446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.418476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.418777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.419151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.419180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.419578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.419915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.610 [2024-04-18 21:19:26.419944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.610 qpair failed and we were unable to recover it. 00:26:10.610 [2024-04-18 21:19:26.420287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.420617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.420648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 
00:26:10.611 [2024-04-18 21:19:26.421022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.421344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.421374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.421744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.422035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.422049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.422416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.422731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.422762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.423084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.423440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.423478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.423845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.424162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.424192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.424539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.424857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.424887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.425204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.425570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.425600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 
00:26:10.611 [2024-04-18 21:19:26.425923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.426241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.426271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.426628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.426939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.426969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.427311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.427724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.427755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.428071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.428443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.428473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.428827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.429143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.429173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.429491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.429770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.429800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.430060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.430449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.430479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 
00:26:10.611 [2024-04-18 21:19:26.430772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.431081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.431111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.431499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.431871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.431901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.432242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.432622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.432653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.432981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.433351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.433381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.433726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.434020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.434049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.434486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.434892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.434906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.435247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.435550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.435581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 
00:26:10.611 [2024-04-18 21:19:26.435845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.436165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.436195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.611 [2024-04-18 21:19:26.436542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.436804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.611 [2024-04-18 21:19:26.436833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.611 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.437255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.437564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.437595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.437972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.438402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.438432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.438864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.439233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.439263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.439607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.439932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.439962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.440357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.440670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.440699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 
00:26:10.612 [2024-04-18 21:19:26.441079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.441406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.441436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.441769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.442085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.442114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.442424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.442749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.442763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.443136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.443519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.443550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.443901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.444225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.444254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.444584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.444963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.444992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.445340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.445766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.445797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 
00:26:10.612 [2024-04-18 21:19:26.446259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.446660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.446691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.447066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.447437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.447467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.447791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.448019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.448048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.448381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.448703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.448735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.449064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.449462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.449493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.449904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.450286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.450315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.450648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.450980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.451009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 
00:26:10.612 [2024-04-18 21:19:26.451415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.451811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.451842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.452121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.452447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.452476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.452877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.453171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.453201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.453531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.453855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.453885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.454196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.454561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.454593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.454970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.455336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.455365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.455746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.456062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.456091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 
00:26:10.612 [2024-04-18 21:19:26.456501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.456838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.456868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.457256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.457588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.457619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.612 [2024-04-18 21:19:26.457971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.458303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.612 [2024-04-18 21:19:26.458333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.612 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.458742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.459069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.459099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.459409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.459693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.459708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.460101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.460406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.460436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.460769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.461063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.461079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 
00:26:10.613 [2024-04-18 21:19:26.461490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.461907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.461938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.462194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.462596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.462628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.462957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.463342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.463373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.463770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.464036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.464066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.464404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.464726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.464756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.465105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.465491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.465529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.465913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.466239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.466268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 
00:26:10.613 [2024-04-18 21:19:26.466684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.467023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.467052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.467383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.467772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.467787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.468095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.468425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.468460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.468817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.469186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.469216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.469540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.469780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.469794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.470143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.470549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.470580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.470910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.471232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.471261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 
00:26:10.613 [2024-04-18 21:19:26.471669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.472051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.472079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.472481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.472913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.472944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.473347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.473677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.473708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.474117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.474456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.474485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.474946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.475426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.475482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.475880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.476233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.476280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.476739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.477113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.477150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 
00:26:10.613 [2024-04-18 21:19:26.477563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.477939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.477976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.478414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.478832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.478872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.479166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.479596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.479635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.613 [2024-04-18 21:19:26.480066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.480465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.613 [2024-04-18 21:19:26.480501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.613 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.480950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.481373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.481409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.481847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.482273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.482310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.482745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.483083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.483119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 
00:26:10.614 [2024-04-18 21:19:26.483557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.483978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.484014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.484461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.484821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.484858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.485279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.485627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.485660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.486000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.486397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.486433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.486905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.487328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.487365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.487805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.488223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.488259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.488699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.489096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.489132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 
00:26:10.614 [2024-04-18 21:19:26.489570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.489970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.489987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.490314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.490758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.490797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.491155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.491493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.491538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.491834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.492202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.492238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.492707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.493058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.493094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.493529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.493954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.493991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.494423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.494846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.494884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 
00:26:10.614 [2024-04-18 21:19:26.495265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.495684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.495722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.496158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.496582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.496618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.497074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.497474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.497520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.497983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.498397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.498433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.498882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.499232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.499268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.499669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.500111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.500147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.500584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.500913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.500949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 
00:26:10.614 [2024-04-18 21:19:26.501310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.501651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.501697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.502153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.502576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.502614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.503037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.503456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.503492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.503904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.504243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.504279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.614 qpair failed and we were unable to recover it. 00:26:10.614 [2024-04-18 21:19:26.504715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.505069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.614 [2024-04-18 21:19:26.505105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.505476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.505811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.505860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.506245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.506664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.506701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 
00:26:10.615 [2024-04-18 21:19:26.507158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.507472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.507508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.507940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.508351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.508388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.508750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.509178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.509215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.509678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.510087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.510124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.510560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.510929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.510966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.511379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.511803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.511841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.512301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.512725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.512762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 
00:26:10.615 [2024-04-18 21:19:26.513180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.513606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.513643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.514081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.514497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.514558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.515019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.515440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.515476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.515850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.516265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.516301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.516734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.517156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.517193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.517629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.518050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.518087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.518536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.518875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.518912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 
00:26:10.615 [2024-04-18 21:19:26.519277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.519699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.519744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.520102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.520445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.520481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.520928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.521349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.521385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.521805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.522114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.522150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.522505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.522949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.522985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.523372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.523791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.523827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.524242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.524665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.524681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 
00:26:10.615 [2024-04-18 21:19:26.525066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.525433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.615 [2024-04-18 21:19:26.525449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.615 qpair failed and we were unable to recover it. 00:26:10.615 [2024-04-18 21:19:26.525760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.526088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.526124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.616 qpair failed and we were unable to recover it. 00:26:10.616 [2024-04-18 21:19:26.526531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.526911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.526927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.616 qpair failed and we were unable to recover it. 00:26:10.616 [2024-04-18 21:19:26.527306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.527614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.527631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.616 qpair failed and we were unable to recover it. 00:26:10.616 [2024-04-18 21:19:26.528029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.528424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.528460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.616 qpair failed and we were unable to recover it. 00:26:10.616 [2024-04-18 21:19:26.528900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.616 [2024-04-18 21:19:26.529253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.529269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.529634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.529922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.529939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 
00:26:10.882 [2024-04-18 21:19:26.530327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.530642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.530659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.531049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.531344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.531379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.531825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.532226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.532262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.532620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.533036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.533053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.533436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.533831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.533868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.534257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.534679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.534715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.535094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.535475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.535520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 
00:26:10.882 [2024-04-18 21:19:26.535913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.536248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.536285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.536641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.537005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.537041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.537473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.537813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.537830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.538228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.538646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.538684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.539146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.539506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.539554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.539891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.540324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.540360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.540795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.541069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.541106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 
00:26:10.882 [2024-04-18 21:19:26.541567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.541962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.541999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.542290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.542713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.542752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.543206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.543599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.543616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.544014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.544427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.544463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.544853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.545268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.545305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.545742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.546153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.546190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.546550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.546972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.547008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 
00:26:10.882 [2024-04-18 21:19:26.547439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.547781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.547818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.548254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.548670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.548708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.549118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.549494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.549518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.549847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.550267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.550302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.550738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.551065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.551101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.551501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.551934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.551970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.552409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.552815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.552833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 
00:26:10.882 [2024-04-18 21:19:26.553143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.553538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.553576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.554290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.554723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.554767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.555219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.555639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.555678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.556054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.556416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.556452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.882 qpair failed and we were unable to recover it. 00:26:10.882 [2024-04-18 21:19:26.556843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.557270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.882 [2024-04-18 21:19:26.557307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.557683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.558037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.558054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.558341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.558723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.558739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 
00:26:10.883 [2024-04-18 21:19:26.559137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.559556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.559594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.559949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.560343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.560379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.560831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.561199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.561244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.561709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.562129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.562165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.562643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.563039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.563076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.563505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.563845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.563862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.564195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.564632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.564670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 
00:26:10.883 [2024-04-18 21:19:26.565028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.565375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.565411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.565825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.566172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.566188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.566548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.566941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.566978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.567357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.567686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.567723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.568133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.568449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.568466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.568855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.569236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.569280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.569697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.570116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.570152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 
00:26:10.883 [2024-04-18 21:19:26.570525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.570947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.570984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.571421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.571783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.571821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.572163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.572497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.572544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.572958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.573376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.573412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.573771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.574193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.574230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.574639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.575057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.575093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.575508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.575950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.575986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 
00:26:10.883 [2024-04-18 21:19:26.576425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.576843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.576881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.577295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.577640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.577682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.578063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.578459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.578495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.578847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.579163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.579200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.579570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.579988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.580025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.580508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.580925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.580941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.581323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.581699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.581737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 
00:26:10.883 [2024-04-18 21:19:26.582156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.582578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.582616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.583058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.583391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.583427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.583871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.584228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.584265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.584701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.585098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.585135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.585526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.585936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.585952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.586275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.586702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.586740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.587173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.587589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.587626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 
00:26:10.883 [2024-04-18 21:19:26.588034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.588468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.588504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.588895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.589343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.589380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.589814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.590141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.590157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.590570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.590920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.590958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.591392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.591790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.591827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.592169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.592526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.592544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.592860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.593255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.593291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 
00:26:10.883 [2024-04-18 21:19:26.593673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.594091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.594128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.594524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.594930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.594967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.595402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.595749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.595811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.596277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.596556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.596594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.596957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.597335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.597371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.597808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.598238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.598274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.598711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.599133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.599170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 
00:26:10.883 [2024-04-18 21:19:26.599582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.600013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.600050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.883 qpair failed and we were unable to recover it. 00:26:10.883 [2024-04-18 21:19:26.600411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.883 [2024-04-18 21:19:26.600832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.600869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.601310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.601708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.601745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.602152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.602531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.602568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.602930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.603334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.603371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.603807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.604231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.604269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.604710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.605114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.605151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 
00:26:10.884 [2024-04-18 21:19:26.605588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.606010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.606047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.606532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.606875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.606911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.607341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.607692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.607710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.608181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.608506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.608554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.608992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.609386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.609422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.609830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.610132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.610148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.610455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.610798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.610836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 
00:26:10.884 [2024-04-18 21:19:26.611286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.611642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.611689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.612126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.612546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.612583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.613094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.613558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.613596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.613962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.614385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.614421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.614777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.615029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.615046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.615419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.615763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.615801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.616167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.616561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.616598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 
00:26:10.884 [2024-04-18 21:19:26.617028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.617394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.617430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.617908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.618267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.618284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.618624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.619041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.619077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.619436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.619862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.619900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.620254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.620630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.620668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.621079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.621506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.621557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.621958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.622313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.622349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 
00:26:10.884 [2024-04-18 21:19:26.622762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.623187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.623204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.623596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.624023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.624040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.624548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.624940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.624987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.625355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.625733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.625771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.626172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.626611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.626648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.626947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.627344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.627381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.627762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.628185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.628222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 
00:26:10.884 [2024-04-18 21:19:26.628667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.629079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.629116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.629534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.629963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.630000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.630306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.630726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.630744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.631157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.631645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.631682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.632036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.632340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.632356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.632729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.633063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.633099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.633537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.633960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.633996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 
00:26:10.884 [2024-04-18 21:19:26.634434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.634782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.634821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.635198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.635579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.635617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.636039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.636460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.636496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.636958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.637333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.637349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.637731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.638076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.638112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.638472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.638880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.638917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.639277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.639648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.639686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 
00:26:10.884 [2024-04-18 21:19:26.640129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.640409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.640445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.884 qpair failed and we were unable to recover it. 00:26:10.884 [2024-04-18 21:19:26.640829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.884 [2024-04-18 21:19:26.641242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.641258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.641566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.641968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.641985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.642380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.642723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.642761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.643014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.643314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.643331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.643718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.644008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.644045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.644466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.644764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.644802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 
00:26:10.885 [2024-04-18 21:19:26.645205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.645532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.645570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.645986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.646393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.646429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.646869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.647292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.647308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.647674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.648013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.648029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.648333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.648654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.648672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.649058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.649455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.649492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 00:26:10.885 [2024-04-18 21:19:26.649950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.650292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.885 [2024-04-18 21:19:26.650308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.885 qpair failed and we were unable to recover it. 
00:26:10.885 [2024-04-18 21:19:26.650671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.885 [2024-04-18 21:19:26.650903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.885 [2024-04-18 21:19:26.650950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420
00:26:10.885 qpair failed and we were unable to recover it.
[the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x2461c90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats, with only the timestamps changing, for every reconnect attempt between the first and last cycles shown here]
00:26:10.888 [2024-04-18 21:19:26.775342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.888 [2024-04-18 21:19:26.775604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.888 [2024-04-18 21:19:26.775621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420
00:26:10.888 qpair failed and we were unable to recover it.
00:26:10.888 [2024-04-18 21:19:26.776013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.776274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.776310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.776783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.777216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.777265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.777650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.777945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.777982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.778387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.778810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.778847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.779135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.779558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.779595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.779956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.780309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.780345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.780738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.781158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.781194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 
00:26:10.888 [2024-04-18 21:19:26.781605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.781937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.781973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.782317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.782724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.782763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.783124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.783544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.783581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.783994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.784428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.784464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.784912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.785192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.785236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.785594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.785917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.785954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.786249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.786568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.786585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 
00:26:10.888 [2024-04-18 21:19:26.786935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.787291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.787327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.787843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.788290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.788326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.788768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.789186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.789222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.789633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.790068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.790104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.790456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.790910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.790949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.791404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.791800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.791837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.792281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.792621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.792657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 
00:26:10.888 [2024-04-18 21:19:26.793018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.793413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.793450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.793867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.794272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.794309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.794772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.795158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.795195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.795606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.796011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.796048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.796464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.796896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.796934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.797369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.797729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.797747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.798129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.798496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.798549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 
00:26:10.888 [2024-04-18 21:19:26.798986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.799360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.799396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.799843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.800242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.800278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.800742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.801082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.801098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.801476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.801852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.801869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.802264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.802615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.802653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.803113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.803525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.803544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.803904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.804334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.804369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 
00:26:10.888 [2024-04-18 21:19:26.804809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.805218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.888 [2024-04-18 21:19:26.805256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:10.888 qpair failed and we were unable to recover it. 00:26:10.888 [2024-04-18 21:19:26.805685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.805941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.805958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.806289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.806604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.806620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.807011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.807314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.807330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.807722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.808094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.808130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.808528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.808876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.808913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.809350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.809764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.809785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 
00:26:11.159 [2024-04-18 21:19:26.810083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.810537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.810577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.811003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.811401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.811438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.811767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.812115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.812152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.812589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.812934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.812971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.813328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.813722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.813739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.813996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.814314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.814331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.814724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.815154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.815190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 
00:26:11.159 [2024-04-18 21:19:26.815539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.815775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.815791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.159 qpair failed and we were unable to recover it. 00:26:11.159 [2024-04-18 21:19:26.816088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.159 [2024-04-18 21:19:26.816415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.816452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.816808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.817215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.817251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.817600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.817982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.817998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.818262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.818535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.818574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.818870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.819158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.819206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.819583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.819980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.820016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 
00:26:11.160 [2024-04-18 21:19:26.820461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.820877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.820895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.821163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.821528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.821545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.821931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.822293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.822309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.822566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.822942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.822979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.823418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.823813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.823850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.824222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.824634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.824650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.825044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.825371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.825416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 
00:26:11.160 [2024-04-18 21:19:26.825822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.826233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.826269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.826690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.827116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.827154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.827576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.827998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.828035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.828559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.828980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.829017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.829459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.829759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.829777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.830144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.830467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.830503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.830826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.831174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.831210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 
00:26:11.160 [2024-04-18 21:19:26.831590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.831942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.831979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.832435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.832711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.832729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.833069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.833437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.833472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.833845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.834262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.834298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.834735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.835086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.835122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.835465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.835848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.835886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.836325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.836768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.836785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 
00:26:11.160 [2024-04-18 21:19:26.837101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.837465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.837501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.837896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.838230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.838266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.160 qpair failed and we were unable to recover it. 00:26:11.160 [2024-04-18 21:19:26.838699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.160 [2024-04-18 21:19:26.839175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.839211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.839697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.840061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.840097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.840462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.840861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.840898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.841317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.841739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.841755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.842141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.842442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.842478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 
00:26:11.161 [2024-04-18 21:19:26.842868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.843148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.843184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.843612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.844004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.844041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.844427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.844818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.844836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.845218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.845593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.845630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.846074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.846467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.846483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.846853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.847210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.847247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.847620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.848015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.848031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 
00:26:11.161 [2024-04-18 21:19:26.848423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.848844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.848881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.849314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.849653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.849690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.849964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.850311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.850347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.850697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.851080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.851117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.851466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.851829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.851867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.852210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.852560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.852577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 00:26:11.161 [2024-04-18 21:19:26.852867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.853166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.161 [2024-04-18 21:19:26.853203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.161 qpair failed and we were unable to recover it. 
00:26:11.161 [2024-04-18 21:19:26.853615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.161 [2024-04-18 21:19:26.853990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.161 [2024-04-18 21:19:26.854027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420
00:26:11.161 qpair failed and we were unable to recover it.
00:26:11.161-00:26:11.167 [... the same error sequence (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats for every further connection attempt in this window, from 21:19:26.854462 through 21:19:26.966092 ...]
00:26:11.167 [2024-04-18 21:19:26.966488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.966818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.966834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.967186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.967613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.967650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.967996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.968342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.968378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.968788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.969118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.969154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.969548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.969940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.969977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.970374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.970754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.970791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.971114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.971397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.971434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 
00:26:11.167 [2024-04-18 21:19:26.971645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.971813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.971830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.972204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.972557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.972596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.972937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.973324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.973377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.973664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.974021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.974057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.974427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.974744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.974789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.975053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.975396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.975432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.975787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.976188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.976224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 
00:26:11.167 [2024-04-18 21:19:26.976566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.976958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.977003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.977238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.977605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.977642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.978014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.978367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.978403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.978690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.979100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.979136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.979559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.979910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.979946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.980348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.980689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.980727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.981049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.981284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.981300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 
00:26:11.167 [2024-04-18 21:19:26.981676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.982019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.982056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.982385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.982765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.167 [2024-04-18 21:19:26.982813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.167 qpair failed and we were unable to recover it. 00:26:11.167 [2024-04-18 21:19:26.983113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.983399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.983415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.983725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.984084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.984126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.984459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.984910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.984947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.985299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.985620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.985658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.985984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.986366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.986401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 
00:26:11.168 [2024-04-18 21:19:26.986743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.987151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.987188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.987602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.988006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.988042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.988391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.988773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.988810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.989136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.989503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.989566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.989899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.990300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.990316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.990640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.990936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.990972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.991267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.991647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.991685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 
00:26:11.168 [2024-04-18 21:19:26.992029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.992250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.992286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.992615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.993021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.993057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.993417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.993732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.993770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.994126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.994390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.994426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.994871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.995275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.995312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.995651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.995982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.996018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.996310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.996713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.996749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 
00:26:11.168 [2024-04-18 21:19:26.996959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.997196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.997231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.997690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.998085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.998101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.998413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.998780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.998817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:26.999112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.999539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:26.999577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:27.000016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:27.000370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:27.000386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.168 [2024-04-18 21:19:27.000668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:27.000956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.168 [2024-04-18 21:19:27.000992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.168 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.001282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.001612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.001649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 
00:26:11.169 [2024-04-18 21:19:27.001923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.002302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.002338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.002738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.003083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.003123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.003483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.003896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.003934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.004361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.004676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.004713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.005134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.005422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.005438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.005806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.006095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.006131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.006485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.006796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.006833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 
00:26:11.169 [2024-04-18 21:19:27.007188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.007569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.007606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.007955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.008309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.008345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.008583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.008903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.008939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.009283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.009544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.009581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.010001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.010324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.010340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.010689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.010958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.010995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.011336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.011660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.011716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 
00:26:11.169 [2024-04-18 21:19:27.012014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.012245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.012260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.012475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.012768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.012806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.013226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.013629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.013666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.013945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.014276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.014311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.014730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.015121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.015158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.015430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.015749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.015787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.016085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.016365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.016381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 
00:26:11.169 [2024-04-18 21:19:27.016694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.017063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.017099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.017506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.017918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.017960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.018200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.018526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.018563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.018960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.019200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.019216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.019454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.019735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.019751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.020063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.020427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.020470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.169 qpair failed and we were unable to recover it. 00:26:11.169 [2024-04-18 21:19:27.020833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.169 [2024-04-18 21:19:27.021144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.021180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 
00:26:11.170 [2024-04-18 21:19:27.021609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.021934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.021973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.022331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.022663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.022700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.023037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.023389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.023426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.023695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.023915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.023952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.024316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.024630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.024667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.025019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.025293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.025330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.025745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.026147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.026183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 
00:26:11.170 [2024-04-18 21:19:27.026537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.026893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.026908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.027136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.027427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.027449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.027833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.028115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.028152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.028585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.029018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.029054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.029487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.029894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.029932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.030265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.030620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.030637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.030917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.031248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.031264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 
00:26:11.170 [2024-04-18 21:19:27.031489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.031865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.031902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.032301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.032564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.032601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.032878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.033258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.033294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.033718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.034044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.034080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.034434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.034702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.034740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.035090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.035377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.035413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.035810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.036124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.036160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 
00:26:11.170 [2024-04-18 21:19:27.036582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.036914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.036930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.037294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.037562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.037579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.037957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.038337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.038374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.038673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.038990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.039042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.039341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.039643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.039681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.039958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.040337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.040373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 00:26:11.170 [2024-04-18 21:19:27.040765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.041153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.170 [2024-04-18 21:19:27.041189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.170 qpair failed and we were unable to recover it. 
00:26:11.171 [2024-04-18 21:19:27.041444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.041801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.041838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.042239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.042643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.042681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.043104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.043506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.043554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.043793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.044221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.044256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.044660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.045089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.045125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.045403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.045733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.045771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.046062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.046400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.046436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 
00:26:11.171 [2024-04-18 21:19:27.046670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.047002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.047039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.047429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.047768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.047785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.048133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.048477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.048522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.048935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.049246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.049282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.049677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.050012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.050028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.050320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.050615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.050652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.050932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.051332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.051368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 
00:26:11.171 [2024-04-18 21:19:27.051652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.051892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.051909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.052144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.052481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.052497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.052878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.053244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.053281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.053610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.053928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.053963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.054329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.054680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.054717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.055048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.055358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.055374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.055699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.056035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.056071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 
00:26:11.171 [2024-04-18 21:19:27.056351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.056686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.056732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.057087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.057343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.057379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.057669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.057997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.058034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.058374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.058667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.058705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.059071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.059459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.059495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.059905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.060230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.060267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.171 [2024-04-18 21:19:27.060617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.061050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.061086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 
00:26:11.171 [2024-04-18 21:19:27.061415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.061687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.171 [2024-04-18 21:19:27.061724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.171 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.062089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.062348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.062384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.062669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.062944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.062980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.063319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.063725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.063778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.064092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.064413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.064450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.064880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.065263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.065300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.065652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.065985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.066021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 
00:26:11.172 [2024-04-18 21:19:27.066350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.066620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.066658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.067103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.067421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.067437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.067665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.068031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.068047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.068166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.068392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.068428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.068765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.069076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.069092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.069387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.069730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.069768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.070196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.070603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.070640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 
00:26:11.172 [2024-04-18 21:19:27.071096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.071492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.071538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.071935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.072202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.072217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.072431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.072811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.072849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.073269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.073665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.073703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.074109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.074453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.074469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.074826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.075112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.075147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.075443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.075780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.075797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 
00:26:11.172 [2024-04-18 21:19:27.076021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.076328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.076344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.076579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.076869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.076885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.077305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.077645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.077683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.172 [2024-04-18 21:19:27.077992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.078310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.172 [2024-04-18 21:19:27.078327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.172 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.078623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.079007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.079024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.079328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.079635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.079652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.079894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.080147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.080184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 
00:26:11.440 [2024-04-18 21:19:27.080537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.080792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.080809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.081197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.081468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.081506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.081849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.082235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.082271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.082764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.083040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.083077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.083500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.083855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.083892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.084308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.084654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.084692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.085032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.085305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.085341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 
00:26:11.440 [2024-04-18 21:19:27.085673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.086010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.086047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.086501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.086744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.086781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.087057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.087423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.087459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.087806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.088215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.088251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.088613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.088967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.089004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.089307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.089641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.089679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.090050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.090433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.090471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 
00:26:11.440 [2024-04-18 21:19:27.090917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.091189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.091226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.091460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.091818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.091856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.092204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.092567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.092605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.092884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.093142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.093158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.093411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.093855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.093878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.094016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.094246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.094263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 00:26:11.440 [2024-04-18 21:19:27.094700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.094908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.440 [2024-04-18 21:19:27.094925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.440 qpair failed and we were unable to recover it. 
00:26:11.440 [2024-04-18 21:19:27.095207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.095427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.095445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.095744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.095973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.095990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.096206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.096493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.096519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.096751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.097054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.097070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.097219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.097493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.097518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.097869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.098157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.098177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.098470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.098811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.098828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 
00:26:11.441 [2024-04-18 21:19:27.099126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.099413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.099429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.099730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.100008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.100024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.100376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.100662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.100679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.100846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.101132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.101148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.101439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.101714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.101731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.102127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.102470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.102486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.102796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.103185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.103221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 
00:26:11.441 [2024-04-18 21:19:27.103551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.103887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.103924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.104158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.104472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.104489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.104851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.105125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.105142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.105426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.105699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.105716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.105928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.106209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.106226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.106460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.106926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.106964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.107253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.107577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.107615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 
00:26:11.441 [2024-04-18 21:19:27.107894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.108146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.108162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.108476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.108698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.108715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.109027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.109264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.109280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.109581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.109937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.109973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.110246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.110563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.110601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.110988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.111341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.111378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 00:26:11.441 [2024-04-18 21:19:27.111742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.112035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.112052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.441 qpair failed and we were unable to recover it. 
00:26:11.441 [2024-04-18 21:19:27.112302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.112620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.441 [2024-04-18 21:19:27.112658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.112954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.113206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.113222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.113466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.113746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.113763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.114049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.114261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.114278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.114626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.114945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.114981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.115269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.115607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.115644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.115940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.116287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.116330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 
00:26:11.442 [2024-04-18 21:19:27.116611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.116864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.116901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.117183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.117449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.117487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.117837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.118160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.118177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.118557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.118871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.118908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.119123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.119500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.119525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.119786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.119907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.119923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 00:26:11.442 [2024-04-18 21:19:27.120295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.120569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.120606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it. 
00:26:11.442 [2024-04-18 21:19:27.120871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.121275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.442 [2024-04-18 21:19:27.121312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.442 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats for every subsequent reconnect attempt from 21:19:27.121483 through 21:19:27.226931: posix_sock_create reports connect() failed with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x2461c90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." The console timestamp prefix advances from 00:26:11.442 to 00:26:11.447 over this run.]
00:26:11.447 [2024-04-18 21:19:27.227282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.447 [2024-04-18 21:19:27.227682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.447 [2024-04-18 21:19:27.227719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.447 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.228111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.228441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.228476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.228907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.229312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.229348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.229739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.230068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.230105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.230537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.230886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.230922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.231259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.231536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.231573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.231989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.232249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.232286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 
00:26:11.448 [2024-04-18 21:19:27.232656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.232982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.233018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.233435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.233759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.233797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.234125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.234384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.234400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.234751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.235065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.235101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.235447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.235845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.235883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.236169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.236501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.236548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.236768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.237063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.237099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 
00:26:11.448 [2024-04-18 21:19:27.237491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.237843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.237880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.238252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.238578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.238616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.239036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.239442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.239478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.239894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.240245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.240281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.240625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.240942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.240978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.241152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.241420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.241456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.241904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.242170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.242206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 
00:26:11.448 [2024-04-18 21:19:27.242530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.242833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.242869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.243280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.243620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.243658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.244014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.244358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.244400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.244808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.245212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.245248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.245522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.245884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.245901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.246281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.246587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.246604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.246856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.247237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.247274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 
00:26:11.448 [2024-04-18 21:19:27.247624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.247971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.248007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.448 [2024-04-18 21:19:27.248428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.248762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.448 [2024-04-18 21:19:27.248799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.448 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.249130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.249561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.249609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.249897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.250264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.250301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.250722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.251079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.251115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.251526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.251881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.251917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.252320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.252575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.252592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 
00:26:11.449 [2024-04-18 21:19:27.252891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.253220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.253256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.253540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.253844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.253880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.254115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.254447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.254463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.254829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.255100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.255137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.255470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.255885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.255922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.256203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.256586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.256619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.257021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.257367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.257404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 
00:26:11.449 [2024-04-18 21:19:27.257764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.258127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.258143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.258442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.258812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.258850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.259269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.259601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.259617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.259835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.260120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.260156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.260589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.260980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.261016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.261419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.261823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.261861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.262257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.262640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.262657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 
00:26:11.449 [2024-04-18 21:19:27.263033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.263281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.263317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.263647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.264048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.264083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.264384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.264737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.264753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.265068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.265279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.265295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.265649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.265973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.266009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.266442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.266675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.449 [2024-04-18 21:19:27.266712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.449 qpair failed and we were unable to recover it. 00:26:11.449 [2024-04-18 21:19:27.266989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.267374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.267416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 
00:26:11.450 [2024-04-18 21:19:27.267779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.268199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.268235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.268546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.268810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.268827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.269011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.269376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.269412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.269752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.269975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.270011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.270357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.270750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.270787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.271209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.271609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.271626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.271868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.272268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.272304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 
00:26:11.450 [2024-04-18 21:19:27.272640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.272992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.273028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.273318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.273750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.273787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.274210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.274503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.274524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.274883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.275209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.275245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.275581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.275889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.275925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.276323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.276647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.276663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.277045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.277313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.277349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 
00:26:11.450 [2024-04-18 21:19:27.277738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.278010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.278046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.278394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.278796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.278834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.279245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.279637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.279654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.279975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.280309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.280345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.280735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.280866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.280886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.281254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.281599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.281635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.282056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.282440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.282475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 
00:26:11.450 [2024-04-18 21:19:27.282834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.283122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.283138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.283489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.283847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.283884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.284232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.284563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.284601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.285022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.285403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.285440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.285734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.285992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.286028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.286360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.286679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.286717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 00:26:11.450 [2024-04-18 21:19:27.287137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.287463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.287499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.450 qpair failed and we were unable to recover it. 
00:26:11.450 [2024-04-18 21:19:27.287866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.450 [2024-04-18 21:19:27.288265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.288307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.288650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.288924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.288941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.289242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.289584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.289622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.289950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.290262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.290306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.290617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.290902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.290918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.291205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.291576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.291592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.291840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.292115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.292158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 
00:26:11.451 [2024-04-18 21:19:27.292527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.292865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.292901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.293326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.293645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.293661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.293952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.294317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.294353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.294689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.295091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.295107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.295440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.295733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.295771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.296122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.296472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.296508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.296860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.297163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.297199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 
00:26:11.451 [2024-04-18 21:19:27.297608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.297994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.298031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.298468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.298785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.298802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.298924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.299265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.299302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.299703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.300120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.300135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.300444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.300696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.300734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.301149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.301469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.301505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.301866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.302221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.302258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 
00:26:11.451 [2024-04-18 21:19:27.302690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.303097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.303134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.303462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.303749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.303787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.304205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.304472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.304488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.304871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.305224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.305261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.305616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.305948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.305984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.306283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.306616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.306633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.306928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.307260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.307296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 
00:26:11.451 [2024-04-18 21:19:27.307661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.307980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.308016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.451 qpair failed and we were unable to recover it. 00:26:11.451 [2024-04-18 21:19:27.308307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.451 [2024-04-18 21:19:27.308721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.308759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.309175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.309521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.309558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.309976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.310312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.310348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.310710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.311012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.311049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.311470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.311818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.311835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.312212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.312474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.312521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 
00:26:11.452 [2024-04-18 21:19:27.312961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.313205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.313241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.313587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.313999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.314035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.314473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.314838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.314875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.315209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.315537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.315575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.315905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.316274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.316311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.316593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.316874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.316910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.317199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.317534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.317572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 
00:26:11.452 [2024-04-18 21:19:27.317914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.318317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.318352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.318723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.319068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.319104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.319502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.319786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.319802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.320081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.320302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.320319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.320691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.321072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.321108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.321452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.321737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.321782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.322066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.322305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.322321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 
00:26:11.452 [2024-04-18 21:19:27.322548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.322893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.322928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.323342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.323624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.323680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.324112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.324554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.324609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.324904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.325260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.325297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.325703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.326062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.326099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.326429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.326795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.326833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.327256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.327591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.327629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 
00:26:11.452 [2024-04-18 21:19:27.328029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.328305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.328343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.328629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.328953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.328990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.452 qpair failed and we were unable to recover it. 00:26:11.452 [2024-04-18 21:19:27.329407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.329723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.452 [2024-04-18 21:19:27.329760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.330105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.330428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.330464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.330821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.331204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.331240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.331638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.331965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.332001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.332342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.332675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.332691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 
00:26:11.453 [2024-04-18 21:19:27.333056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.333304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.333320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.333612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.333986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.334022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.334442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.334694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.334732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.335060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.335333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.335369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.335781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.336095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.336132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.336474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.336839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.336876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.337205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.337538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.337576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 
00:26:11.453 [2024-04-18 21:19:27.337957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.338320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.338336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.338611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.338905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.338941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.339335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.339718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.339754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.340170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.340484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.340539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.340963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.341299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.341335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.341747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.342108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.342124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.342435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.342761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.342798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 
00:26:11.453 [2024-04-18 21:19:27.343082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.343424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.343461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.343868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.344184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.344221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.344618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.345027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.345063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.345475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.345868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.345906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.346299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.346693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.346731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.347163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.347523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.453 [2024-04-18 21:19:27.347559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.453 qpair failed and we were unable to recover it. 00:26:11.453 [2024-04-18 21:19:27.347817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.348151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.348167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 
00:26:11.454 [2024-04-18 21:19:27.348466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.348893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.348931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.349346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.349623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.349661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.350090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.350341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.350376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.350740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.351106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.351142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.351497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.351765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.351781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.352078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.352384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.352419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.352765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.353081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.353118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 
00:26:11.454 [2024-04-18 21:19:27.353490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.353880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.353917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.354295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.354629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.354667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.355038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.355410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.355449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.355811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.356163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.356200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.356487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.356737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.356754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.356981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.357346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.357362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.357596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.357886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.357902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 
00:26:11.454 [2024-04-18 21:19:27.358023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.358308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.358324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.358624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.358908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.358924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.359204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.359499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.359523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.359734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.360012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.360047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.360438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.360759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.360779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.361158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.361524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.361541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 00:26:11.454 [2024-04-18 21:19:27.361841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.362181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.454 [2024-04-18 21:19:27.362216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.454 qpair failed and we were unable to recover it. 
00:26:11.454 [2024-04-18 21:19:27.362576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.721 [2024-04-18 21:19:27.363050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.363066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.363450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.363782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.363798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.364091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.364320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.364336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.364578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.364944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.364979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.365249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.365567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.365603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.365945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.366245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.366261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.366556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.366831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.366869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 
00:26:11.722 [2024-04-18 21:19:27.367262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.367613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.367629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.367933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.368223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.368238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.368539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.368840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.368877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.369229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.369627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.369664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.369956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.370273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.370309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.370739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.371122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.371158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.371496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.371801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.371837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 
00:26:11.722 [2024-04-18 21:19:27.372237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.372621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.372658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.373068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.373304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.373320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.373761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.374043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.374059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.374367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.374727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.374744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.375044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.375438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.375474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.375818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.376167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.376204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.376622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.377004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.377040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 
00:26:11.722 [2024-04-18 21:19:27.377559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.377880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.377917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.378195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.378536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.378573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.378887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.379158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.379188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.379535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.379809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.379847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.380142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.380451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.380486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.380832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.381217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.381253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.381546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.381936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.381972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 
00:26:11.722 [2024-04-18 21:19:27.382381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.382724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.382761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.722 qpair failed and we were unable to recover it. 00:26:11.722 [2024-04-18 21:19:27.383172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.722 [2024-04-18 21:19:27.383550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.383588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.383940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.384153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.384169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.384397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.384736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.384752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.385154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.385472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.385525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.385901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.386168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.386205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.386554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.386940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.386979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 
00:26:11.723 [2024-04-18 21:19:27.387224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.387422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.387438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.387747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.388029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.388045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.388405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.388642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.388658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.388956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.389332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.389348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.389649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.389943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.389959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.390328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.390613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.390630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.391003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.391381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.391417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 
00:26:11.723 [2024-04-18 21:19:27.391707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.392090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.392126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.392560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.392961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.392997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.393348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.393695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.393732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.394096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.394499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.394550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.394822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.395187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.395223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.395622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.395953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.395989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.396432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.396760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.396780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 
00:26:11.723 [2024-04-18 21:19:27.397149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.397425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.397465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.397865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.398208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.398246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.398668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.399004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.399040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.399459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.399729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.399745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.400060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.400406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.400422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.400817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.401038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.401054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.401367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.401713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.401751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 
00:26:11.723 [2024-04-18 21:19:27.402078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.402344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.402380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.402652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.402928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.402944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.723 qpair failed and we were unable to recover it. 00:26:11.723 [2024-04-18 21:19:27.403290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.723 [2024-04-18 21:19:27.403468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.403488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.403786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.404091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.404128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.404470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.404831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.404869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.405285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.405613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.405656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.405895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.406206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.406222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 
00:26:11.724 [2024-04-18 21:19:27.406403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.406631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.406647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.406945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.407225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.407261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.407555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.408029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.408065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.408362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.408711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.408748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.409111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.409421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.409437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.409815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.410113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.410129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.410366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.410728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.410744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 
00:26:11.724 [2024-04-18 21:19:27.411101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.411425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.411461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.411878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.412280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.412315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.412655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.413020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.413056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.413414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.413749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.413787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.414227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.414542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.414580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.414895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.415124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.415139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.415431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.415802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.415839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 
00:26:11.724 [2024-04-18 21:19:27.416261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.416596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.416633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.416905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.417309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.417345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.417700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.418065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.418081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.418366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.418773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.418816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.419114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.419425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.419461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.419870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.420282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.420318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.420607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.420945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.420981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 
00:26:11.724 [2024-04-18 21:19:27.421312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.421643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.421684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.421974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.422336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.422352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.422600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.422847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.422864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.724 qpair failed and we were unable to recover it. 00:26:11.724 [2024-04-18 21:19:27.423185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.724 [2024-04-18 21:19:27.423593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.423630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.424052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.424372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.424388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.424630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.425001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.425037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.425458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.425811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.425849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 
00:26:11.725 [2024-04-18 21:19:27.426188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.426575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.426612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.427040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.427310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.427347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.427797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.428114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.428151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.428504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.428850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.428886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.429330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.429732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.429770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.430131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.430438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.430474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.430840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.431111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.431147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 
00:26:11.725 [2024-04-18 21:19:27.431485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.431823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.431840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.432206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.432576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.432614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.432881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.433262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.433278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.433581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.433984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.434021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.434419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.434760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.434797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.435143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.435364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.435400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.435796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.436111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.436128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 
00:26:11.725 [2024-04-18 21:19:27.436517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.436819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.436856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.437223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.437547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.437585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.437867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.438270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.438307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.438662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.438953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.438969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.439278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.439554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.439598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.439996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.440424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.440461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.440868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.441087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.441102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 
00:26:11.725 [2024-04-18 21:19:27.441404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.441773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.441811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.442155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.442497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.442543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.442963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.443227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.443243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.443536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.443813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.443829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.725 qpair failed and we were unable to recover it. 00:26:11.725 [2024-04-18 21:19:27.444197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.444535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.725 [2024-04-18 21:19:27.444551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.444831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.445119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.445135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.445466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.445877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.445915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 
00:26:11.726 [2024-04-18 21:19:27.446317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.446624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.446663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.447093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.447436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.447472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.447828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.448140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.448176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.448506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.448873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.448909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.449144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.449546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.449582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.449867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.450264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.450279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.450584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.450993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.451028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 
00:26:11.726 [2024-04-18 21:19:27.451353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.451681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.451718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.452064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.452456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.452492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.452874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.453271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.453308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.453723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.453993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.454029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.454256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.454534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.454571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.454948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.455356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.455392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.455811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.456212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.456227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 
00:26:11.726 [2024-04-18 21:19:27.456596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.456886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.456902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.457205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.457593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.457630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.458043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.458370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.458406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.458778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.459185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.459221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.459527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.459879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.459915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.460207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.460607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.460645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 00:26:11.726 [2024-04-18 21:19:27.460987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.461318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.726 [2024-04-18 21:19:27.461354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.726 qpair failed and we were unable to recover it. 
00:26:11.727 [2024-04-18 21:19:27.461783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.462048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.462064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.462273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.462677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.462714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.463108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.463433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.463469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.463814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.464129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.464166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.464568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.464835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.464871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.465266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.465688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.465725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.466064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.466333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.466349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 
00:26:11.727 [2024-04-18 21:19:27.466738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.466946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.466982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.467391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.467660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.467698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.468035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.468248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.468266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.468560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.468912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.468949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.469243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.469584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.469621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.469906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.470139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.470174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.470528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.470794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.470830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 
00:26:11.727 [2024-04-18 21:19:27.471167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.471484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.471531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.471962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.472342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.472380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.472606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.472965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.473002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.473336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.473695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.473711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.474011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.474341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.474377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.474707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.474974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.475012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.475311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.475583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.475627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 
00:26:11.727 [2024-04-18 21:19:27.475928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.476251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.476287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.476626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.476824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.476861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.477205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.477543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.477580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.477850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.478044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.478080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.478420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.478705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.478742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.479085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.479378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.479414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.479758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.480044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.480080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 
00:26:11.727 [2024-04-18 21:19:27.480503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.480864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.727 [2024-04-18 21:19:27.480900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.727 qpair failed and we were unable to recover it. 00:26:11.727 [2024-04-18 21:19:27.481192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.481474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.481490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.481792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.482070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.482106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.482541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.482803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.482840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.483093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.483413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.483429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.483649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.483926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.483943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.484178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.484463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.484502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 
00:26:11.728 [2024-04-18 21:19:27.484863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.485267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.485304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.485577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.485902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.485937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.486269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.486649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.486687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.487017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.487423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.487439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.487673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.487962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.487999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.488348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.488682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.488719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.489161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.489422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.489458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 
00:26:11.728 [2024-04-18 21:19:27.489814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.490148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.490185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.490526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.490843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.490879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.491233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.491547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.491584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.491882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.492276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.492312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.492707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.493043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.493079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.493428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.493863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.493900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.494243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.494356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.494372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 
00:26:11.728 [2024-04-18 21:19:27.494595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.494931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.494967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.495379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.495723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.495760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.495999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.496220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.496257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.496612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.497046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.497082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.497469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.497771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.497809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.498069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.498432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.498468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.498766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.499177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.499194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 
00:26:11.728 [2024-04-18 21:19:27.499419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.499762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.499799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.500221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.500570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.728 [2024-04-18 21:19:27.500601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.728 qpair failed and we were unable to recover it. 00:26:11.728 [2024-04-18 21:19:27.500897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.501212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.501247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.501601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.501990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.502025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.502368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.502700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.502737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.503132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.503474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.503527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.503863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.504199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.504215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 
00:26:11.729 [2024-04-18 21:19:27.504445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.504728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.504746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.505043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.505318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.505354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.505697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.505967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.506003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.506343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.506674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.506711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.507032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.507245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.507261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.507494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.507731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.507747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.508043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.508261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.508277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 
00:26:11.729 [2024-04-18 21:19:27.508576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.508855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.508904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.509234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.509650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.509696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.510041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.510299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.510315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.510608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.510834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.510850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.511138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.511417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.511434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.511859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.512106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.512122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.512367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.512713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.512751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 
00:26:11.729 [2024-04-18 21:19:27.513035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.513222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.513259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.513558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.513828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.513876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.514094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.514376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.514412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.514756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.515032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.515067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.515406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.515726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.515771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.516031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.516372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.516409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.516740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.516968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.517004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 
00:26:11.729 [2024-04-18 21:19:27.517331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.517642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.517680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.518038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.518293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.518328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.518602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.518813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.518830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.729 qpair failed and we were unable to recover it. 00:26:11.729 [2024-04-18 21:19:27.519078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.729 [2024-04-18 21:19:27.519296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.519312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.519536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.519762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.519778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.520062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.520328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.520365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.520762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.521143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.521159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 
00:26:11.730 [2024-04-18 21:19:27.521396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.521685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.521725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.522069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.522334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.522369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.522695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.522967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.523004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.523354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.523682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.523719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.523993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.524257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.524294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.524649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.525057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.525093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.525438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.525673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.525689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 
00:26:11.730 [2024-04-18 21:19:27.526070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.526395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.526411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.526812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.527133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.527169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.527503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.527735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.527751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.528001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.528284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.528321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.528721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.528987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.529023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.529294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.529556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.529594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.529858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.530109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.530145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 
00:26:11.730 [2024-04-18 21:19:27.530488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.530713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.530729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.531086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.531360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.531396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.531771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.532042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.532078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.532430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.532739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.532756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.532993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.533352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.533388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.533728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.534050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.534087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.534428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.534784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.534822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 
00:26:11.730 [2024-04-18 21:19:27.535115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.535474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.535490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.535793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.536118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.536154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.536414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.536705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.536722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.536960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.537243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.537258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.730 qpair failed and we were unable to recover it. 00:26:11.730 [2024-04-18 21:19:27.537481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.537774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.730 [2024-04-18 21:19:27.537812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.538209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.538547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.538584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.538866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.539115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.539131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 
00:26:11.731 [2024-04-18 21:19:27.539392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.539617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.539633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.539923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.540171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.540207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.540628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.540843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.540859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.541114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.541552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.541597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.541876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.542152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.542188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.542455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.542628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.542645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.543021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.543403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.543439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 
00:26:11.731 [2024-04-18 21:19:27.543674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.543947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.543984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.544324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.544671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.544709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.544976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.545252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.545289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.545570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.545832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.545869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.546186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.546508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.546531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.546814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.547184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.547219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.547552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.547905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.547948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 
00:26:11.731 [2024-04-18 21:19:27.548243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.548502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.548526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.548756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.549110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.549146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.549438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.549715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.549753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.550032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.550359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.550395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.550740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.551150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.551186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.551525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.551740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.551757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.552180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.552402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.552438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 
00:26:11.731 [2024-04-18 21:19:27.552796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.553117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.553154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.731 qpair failed and we were unable to recover it. 00:26:11.731 [2024-04-18 21:19:27.553527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.731 [2024-04-18 21:19:27.554020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.554062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.554435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.554709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.554747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.555181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.555526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.555563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.555912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.556228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.556244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.556552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.556785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.556822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.557086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.557419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.557456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 
00:26:11.732 [2024-04-18 21:19:27.557839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.558192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.558228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.558557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.558877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.558913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.559191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.559538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.559576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.559874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.560213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.560249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.560572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.560791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.560807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.561100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.561331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.561346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.561649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.561980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.562016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 
00:26:11.732 [2024-04-18 21:19:27.562367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.562751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.562788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.563031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.563284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.563320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.563592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.563958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.563994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.564283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.564650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.564688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.565018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.565285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.565323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.565655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.565927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.565967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.566305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.566626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.566663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 
00:26:11.732 [2024-04-18 21:19:27.566945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.567100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.567135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.567458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.567775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.567824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.568188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.568476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.568527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.568818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.568988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.569004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.569228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.569561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.569599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.570024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.570336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.570352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.570600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.570909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.570944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 
00:26:11.732 [2024-04-18 21:19:27.571239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.571504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.571528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.571768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.572077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.572093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.732 qpair failed and we were unable to recover it. 00:26:11.732 [2024-04-18 21:19:27.572389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.572684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.732 [2024-04-18 21:19:27.572715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.573058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.573382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.573398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.573696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.573974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.574010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.574481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.574770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.574807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.575193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.575424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.575440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 
00:26:11.733 [2024-04-18 21:19:27.575738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.576033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.576069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.576360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.576630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.576667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.577023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.577289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.577326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.577617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.577950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.577986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.578309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.578584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.578632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.578898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.579281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.579318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.579657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.579988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.580024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 
00:26:11.733 [2024-04-18 21:19:27.580424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.580695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.580732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.581011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.581268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.581312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.581542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.581839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.581855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.582138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.582351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.582368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.582661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.583056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.583072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.583299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.583630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.583667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.584084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.584344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.584380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 
00:26:11.733 [2024-04-18 21:19:27.584732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.585057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.585072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.585427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.585634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.585671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.586022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.586292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.586328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.586667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.586939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.586981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.587271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.587597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.587634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.587982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.588318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.588355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.588647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.588974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.589011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 
00:26:11.733 [2024-04-18 21:19:27.589406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.589727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.589766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.590112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.590390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.590427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.590690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.590948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.590984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.733 qpair failed and we were unable to recover it. 00:26:11.733 [2024-04-18 21:19:27.591279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.591641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.733 [2024-04-18 21:19:27.591658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.592017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.592284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.592320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.592610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.592845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.592881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.593153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.593528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.593566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 
00:26:11.734 [2024-04-18 21:19:27.593835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.594164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.594206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.594524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.594749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.594766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.595085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.595330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.595366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.595660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.595936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.595972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.596253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.596530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.596546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.596782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.597067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.597083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.597365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.597674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.597711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 
00:26:11.734 [2024-04-18 21:19:27.598002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.598264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.598308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.598549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.598775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.598812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.599159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.599413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.599428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.599710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.599929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.599945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.600162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.600436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.600472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.600778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.601178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.601215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.601552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.601910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.601946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 
00:26:11.734 [2024-04-18 21:19:27.602232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.602617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.602656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.603056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.603435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.603471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.603837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.604095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.604131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.604436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.604664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.604680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.605055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.605330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.605368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.605735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.605994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.606031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.606307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.606589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.606631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 
00:26:11.734 [2024-04-18 21:19:27.606919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.607205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.607225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.607477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.607789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.607804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.608075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.608298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.608311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.608541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.608828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.608841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.734 [2024-04-18 21:19:27.609019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.609233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.734 [2024-04-18 21:19:27.609246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.734 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.609423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.609645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.609659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.610002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.610277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.610290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 
00:26:11.735 [2024-04-18 21:19:27.610627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.610852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.610865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.611067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.611289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.611302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.611535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.611749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.611762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.611983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.612202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.612218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.612488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.612713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.612727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.612945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.613211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.613224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.613581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.613917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.613930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 
00:26:11.735 [2024-04-18 21:19:27.614208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.614488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.614501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.614741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.615014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.615027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.615236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.615522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.615536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.615817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.616173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.616186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.616417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.616630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.616644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.616950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.617171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.617184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.617472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.617702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.617716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 
00:26:11.735 [2024-04-18 21:19:27.617998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.618356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.618369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.618660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.618866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.618879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.619246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.619604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.619619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.619902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.620128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.620141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.620522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.620809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.620823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.621156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.621423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.621437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.621714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.622047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.622060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 
00:26:11.735 [2024-04-18 21:19:27.622330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.622667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.622681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.735 qpair failed and we were unable to recover it. 00:26:11.735 [2024-04-18 21:19:27.622954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.623285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.735 [2024-04-18 21:19:27.623298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.623527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.623857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.623870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.624142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.624477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.624490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.624831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.625166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.625179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.625485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.625784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.625798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.626011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.626218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.626231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 
00:26:11.736 [2024-04-18 21:19:27.626466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.626739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.626754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.627114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.627328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.627342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.627652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.627941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.627954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.628246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.628525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.628539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.628830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.629159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.629172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.629439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.629733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.629747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.629960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.630231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.630244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 
00:26:11.736 [2024-04-18 21:19:27.630589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.630802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.630815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.631031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.631384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.631397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.631600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.631960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.631973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.632307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.632639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.632654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.632985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.633276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.633289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.633566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.633940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.633953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.634266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.634533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.634548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 
00:26:11.736 [2024-04-18 21:19:27.634818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.635047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.635060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.635351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.635552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.635566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.635888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.636092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.636105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.636393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.636773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.636786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.636900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.637175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.637189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.637481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.637694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.637708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.637996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.638261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.638274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 
00:26:11.736 [2024-04-18 21:19:27.638631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.638919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.638932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.639217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.639480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.639497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.736 [2024-04-18 21:19:27.639890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.640241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.736 [2024-04-18 21:19:27.640254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.736 qpair failed and we were unable to recover it. 00:26:11.737 [2024-04-18 21:19:27.640471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.640844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.640858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.737 qpair failed and we were unable to recover it. 00:26:11.737 [2024-04-18 21:19:27.641139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.641421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.641434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.737 qpair failed and we were unable to recover it. 00:26:11.737 [2024-04-18 21:19:27.641707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.641928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.641944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.737 qpair failed and we were unable to recover it. 00:26:11.737 [2024-04-18 21:19:27.642224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.642439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.642452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.737 qpair failed and we were unable to recover it. 
00:26:11.737 [2024-04-18 21:19:27.642734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.643070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.643082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.737 qpair failed and we were unable to recover it. 00:26:11.737 [2024-04-18 21:19:27.643306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.643580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.643594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.737 qpair failed and we were unable to recover it. 00:26:11.737 [2024-04-18 21:19:27.643813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.644144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.737 [2024-04-18 21:19:27.644157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:11.737 qpair failed and we were unable to recover it. 00:26:11.737 [2024-04-18 21:19:27.644493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.001 [2024-04-18 21:19:27.644781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.001 [2024-04-18 21:19:27.644795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.001 qpair failed and we were unable to recover it. 00:26:12.001 [2024-04-18 21:19:27.645073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.001 [2024-04-18 21:19:27.645354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.001 [2024-04-18 21:19:27.645367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.001 qpair failed and we were unable to recover it. 00:26:12.001 [2024-04-18 21:19:27.645578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.001 [2024-04-18 21:19:27.645784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.001 [2024-04-18 21:19:27.645797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.001 qpair failed and we were unable to recover it. 00:26:12.001 [2024-04-18 21:19:27.646019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.646315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.646328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 
00:26:12.002 [2024-04-18 21:19:27.646455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.646743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.646757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.647113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.647332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.647348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.647566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.647894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.647908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.648240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.648573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.648587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.648854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.649131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.649144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.649363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.649651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.649664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.649952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.650218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.650231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 
00:26:12.002 [2024-04-18 21:19:27.650529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.650814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.650827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.651090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.651369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.651382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.651648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.651953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.651966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.652245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.652649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.652663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.652965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.653228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.653241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.653611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.653891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.653904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.654240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.654507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.654529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 
00:26:12.002 [2024-04-18 21:19:27.654813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.655084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.655097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.655391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.655694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.655708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.656050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.656267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.656281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.656501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.656791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.656804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.657118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.657350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.657363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.657667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.657948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.657962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.658187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.658526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.658540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 
00:26:12.002 [2024-04-18 21:19:27.658757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.659048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.659062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.659358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.659690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.659704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.660010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.660362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.660375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.660600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.660880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.660892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.661108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.661282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.661295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.661592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.661804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.661816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.002 qpair failed and we were unable to recover it. 00:26:12.002 [2024-04-18 21:19:27.662174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.002 [2024-04-18 21:19:27.662502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.662522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 
00:26:12.003 [2024-04-18 21:19:27.662866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.663199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.663212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.663487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.663771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.663785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.664052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.664353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.664366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.664585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.664870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.664883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.664989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.665264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.665277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.665497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.665781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.665795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.666011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.666294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.666307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 
00:26:12.003 [2024-04-18 21:19:27.666584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.666874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.666887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.667223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.667499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.667518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.667853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.668186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.668198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.668417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.668586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.668600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.668871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.669249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.669262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.669596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.669815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.669828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.670106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.670369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.670382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 
00:26:12.003 [2024-04-18 21:19:27.670584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.670918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.670931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.671284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.671482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.671495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.671769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.671995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.672007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.672366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.672638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.672652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.672925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.673258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.673271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.673567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.673777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.673790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.674025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.674412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.674425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 
00:26:12.003 [2024-04-18 21:19:27.674641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.674848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.674861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.675214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.675476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.675489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.675708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.675985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.675998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.676282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.676559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.676575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.676791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.677069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.677083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.677377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.677657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.677670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 00:26:12.003 [2024-04-18 21:19:27.677934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.678143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.003 [2024-04-18 21:19:27.678156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.003 qpair failed and we were unable to recover it. 
00:26:12.004 [2024-04-18 21:19:27.678440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.678771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.678785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.679049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.679375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.679388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.679622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.679977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.679990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.680252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.680537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.680562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.680851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.681209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.681222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.681532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.681815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.681828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.682133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.682439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.682452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 
00:26:12.004 [2024-04-18 21:19:27.682788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.683172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.683185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.683477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.683690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.683703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.683901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.684191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.684204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.684538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.684801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.684815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.685035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.685244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.685257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.685456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.685742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.685755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.686036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.686339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.686352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 
00:26:12.004 [2024-04-18 21:19:27.686647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.686933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.686946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.687211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.687407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.687420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.687709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.688078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.688091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.688427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.688714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.688728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.688947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.689301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.689314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.689661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.689946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.689959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.690290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.690517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.690531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 
00:26:12.004 [2024-04-18 21:19:27.690886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.691237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.691249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.691493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.691873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.691886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.692099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.692329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.692342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.692559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.692896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.692909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.693193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.693477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.693490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.693803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.694086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.694099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.004 [2024-04-18 21:19:27.694317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.694624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.694637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 
00:26:12.004 [2024-04-18 21:19:27.694899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.695119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.004 [2024-04-18 21:19:27.695132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.004 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.695402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.695553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.695567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.695782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.695984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.695997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.696354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.696455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.696468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.696825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.697189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.697201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.697547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.697894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.697907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.698183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.698398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.698411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 
00:26:12.005 [2024-04-18 21:19:27.698634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.698896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.698910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.699132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.699402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.699415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.699705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.699875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.699888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.700103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.700368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.700381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.700607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.700951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.700964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.701299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.701525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.701538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.701824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.702167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.702180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 
00:26:12.005 [2024-04-18 21:19:27.702559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.702843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.702856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.703070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.703369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.703382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.703742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.703965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.703977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.704308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.704582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.704596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.704932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.705229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.705242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.705523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.705855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.705870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.706080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.706407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.706420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 
00:26:12.005 [2024-04-18 21:19:27.706758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.707092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.707105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.707416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.707696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.707709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.708069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.708360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.708373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.708731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.709069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.709082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.709387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.709503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.709529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.709795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.710147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.710160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.710426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.710768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.710781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 
00:26:12.005 [2024-04-18 21:19:27.711161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.711466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.005 [2024-04-18 21:19:27.711479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.005 qpair failed and we were unable to recover it. 00:26:12.005 [2024-04-18 21:19:27.711751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.712106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.712119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.712404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.712576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.712590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.712923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.713120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.713133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.713346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.713681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.713695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.713926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.714278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.714291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.714568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.714710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.714723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 
00:26:12.006 [2024-04-18 21:19:27.715055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.715319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.715332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.715599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.715953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.715965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.716319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.716602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.716616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.716977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.717262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.717274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.717502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.717861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.717875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.718097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.718404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.718417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.718767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.719048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.719061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 
00:26:12.006 [2024-04-18 21:19:27.719280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.719509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.719528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.719763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.720051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.720069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.720346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.720456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.720469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.720829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.721062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.721075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.721351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.721705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.721718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.722076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.722342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.722355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.722637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.722990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.723003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 
00:26:12.006 [2024-04-18 21:19:27.723284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.723617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.723630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.723911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.724274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.724287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.724522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.724881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.724894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.725179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.725508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.006 [2024-04-18 21:19:27.725528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.006 qpair failed and we were unable to recover it. 00:26:12.006 [2024-04-18 21:19:27.725867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.726078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.726091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.726332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.726661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.726674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.726959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.727220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.727233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 
00:26:12.007 [2024-04-18 21:19:27.727594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.727951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.727964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.728327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.728598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.728612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.728888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.729169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.729182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.729459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.729740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.729754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.730023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.730377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.730390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.730748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.731028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.731041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.731250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.731606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.731620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 
00:26:12.007 [2024-04-18 21:19:27.731848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.732157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.732170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.732525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.732751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.732765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.733045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.733306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.733319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.733665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.733894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.733907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.734174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.734477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.734490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.734724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.734831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.734844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.735117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.735507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.735527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 
00:26:12.007 [2024-04-18 21:19:27.735905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.736271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.736286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.736461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.736681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.736695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.737032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.737334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.737347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.737634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.737847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.737860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.738035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.738388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.738400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.738715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.738992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.739006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.739299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.739562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.739575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 
00:26:12.007 [2024-04-18 21:19:27.739842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.740117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.740130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.740371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.740675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.740689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.741027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.741377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.741390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.741726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.742080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.742096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.007 qpair failed and we were unable to recover it. 00:26:12.007 [2024-04-18 21:19:27.742368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.742642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.007 [2024-04-18 21:19:27.742655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.742955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.743301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.743314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.743612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.743903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.743916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 
00:26:12.008 [2024-04-18 21:19:27.744279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.744660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.744674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.744945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.745172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.745186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.745521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.745808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.745821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.746152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.746482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.746495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.746705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.747052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.747066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.747364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.747587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.747601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.747821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.748175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.748188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 
00:26:12.008 [2024-04-18 21:19:27.748410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.748710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.748724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.748988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.749289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.749302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.749663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.749996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.750009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.750289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.750551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.750564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.750848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.751199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.751212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.751546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.751818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.751831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.752128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.752413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.752425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 
00:26:12.008 [2024-04-18 21:19:27.752701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.753082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.753095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.753316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.753600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.753614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.753840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.754141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.754154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.754488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.754851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.754864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.755104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.755454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.755467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.755699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.756037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.756049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.756343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.756523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.756537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 
00:26:12.008 [2024-04-18 21:19:27.756757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.757107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.757120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.757474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.757759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.757772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.757945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.758293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.758306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.758608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.758965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.758978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.759265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.759498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.759516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.008 [2024-04-18 21:19:27.759782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.760137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.008 [2024-04-18 21:19:27.760151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.008 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.760424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.760727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.760741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 
00:26:12.009 [2024-04-18 21:19:27.761076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.761288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.761301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.761610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.761820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.761833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.762168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.762450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.762463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.762827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.763080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.763108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.763427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.763737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.763766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.764076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.764438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.764467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.764793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.764967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.764980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 
00:26:12.009 [2024-04-18 21:19:27.765252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.765456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.765484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.765896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.766237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.766265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.766534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.766991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.767019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.767329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.767652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.767681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.767996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.768252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.768280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.768601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.769029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.769057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.769357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.769756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.769770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 
00:26:12.009 [2024-04-18 21:19:27.770055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.770398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.770411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.770750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.771022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.771050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.771383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.771772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.771786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.771991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.772202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.772215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.772507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.772762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.772791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.773110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.773468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.773502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.773814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.774059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.774087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 
00:26:12.009 [2024-04-18 21:19:27.774397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.774753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.774766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.774987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.775224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.775252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.775500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.775823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.775852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.776272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.776508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.776545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.776979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.777193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.777207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.777486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.777711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.777725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 00:26:12.009 [2024-04-18 21:19:27.778040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.778316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.009 [2024-04-18 21:19:27.778330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.009 qpair failed and we were unable to recover it. 
00:26:12.009 [2024-04-18 21:19:27.778618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.778956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.778985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.779296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.779689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.779718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.780097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.780344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.780372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.780758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.781009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.781038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.781340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.781680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.781709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.782012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.782249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.782277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.782619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.782925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.782954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 
00:26:12.010 [2024-04-18 21:19:27.783262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.783651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.783681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.783938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.784149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.784162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.784431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.784649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.784663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.784933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.785155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.785168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.785456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.785665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.785679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.785954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.786167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.786181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.786456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.786733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.786747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 
00:26:12.010 [2024-04-18 21:19:27.786961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.787246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.787274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.787613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.787865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.787878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.788097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.788258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.788286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.788597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.788911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.788939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.789176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.789466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.789494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.789906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.790266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.790295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.790553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.790865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.790893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 
00:26:12.010 [2024-04-18 21:19:27.791133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.791442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.791470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.791745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.792044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.792072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.792322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.792572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.792601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.792853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.793281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.793309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.793619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.793968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.793996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.794265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.794446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.794474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 00:26:12.010 [2024-04-18 21:19:27.794805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.795105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.795134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.010 qpair failed and we were unable to recover it. 
00:26:12.010 [2024-04-18 21:19:27.795382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.795650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.010 [2024-04-18 21:19:27.795679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.795918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.796159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.796171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.796578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.796783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.796796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.797046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.797280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.797308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.797619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.797937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.797966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.798152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.798473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.798501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.798763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.798967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.798980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 
00:26:12.011 [2024-04-18 21:19:27.799253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.799530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.799544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.799778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.800129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.800142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.800356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.800692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.800705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.801044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.801406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.801434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.801760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.802000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.802028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.802425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.802760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.802790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.803111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.803362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.803390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 
00:26:12.011 [2024-04-18 21:19:27.803785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.804046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.804062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.804330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.804687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.804725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.804979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.805354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.805382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.805766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.806132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.806160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.806414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.806712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.806742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.807129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.807433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.807462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.807742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.808008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.808021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 
00:26:12.011 [2024-04-18 21:19:27.808316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.808525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.808539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.808643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.808924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.808952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.809268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.809565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.809597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.809818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.810092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.810106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.011 qpair failed and we were unable to recover it. 00:26:12.011 [2024-04-18 21:19:27.810330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.810574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.011 [2024-04-18 21:19:27.810604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.810848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.811143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.811170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.811482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.811750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.811780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 
00:26:12.012 [2024-04-18 21:19:27.812170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.812424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.812453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.812764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.813103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.813132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.813448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.813698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.813729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.814031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.814303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.814317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.814534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.814861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.814889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.815203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.815465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.815492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.815759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.816007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.816021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 
00:26:12.012 [2024-04-18 21:19:27.816240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.816529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.816559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.816864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.817157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.817170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.817402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.817630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.817643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.817860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.818167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.818195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.818449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.818762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.818791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.819035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.819327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.819356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.819598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.819900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.819928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 
00:26:12.012 [2024-04-18 21:19:27.820188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.820376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.820389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.820628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.820903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.820931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.821243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.821490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.821530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.821837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.822227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.822240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.822451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.822731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.822744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.823091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.823369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.823383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.823651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.823913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.823926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 
00:26:12.012 [2024-04-18 21:19:27.824151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.824391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.824419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.824695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.825022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.825051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.825222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.825439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.825471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.825786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.826025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.826053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.012 [2024-04-18 21:19:27.826386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.826589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.012 [2024-04-18 21:19:27.826619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.012 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.826922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.827191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.827204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.827430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.827653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.827666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 
00:26:12.013 [2024-04-18 21:19:27.827965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.828179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.828192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.828466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.828739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.828753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.829107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.829321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.829334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.829604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.829814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.829827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.830094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.830299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.830312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.830589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.830861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.830874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.831155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.831417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.831431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 
00:26:12.013 [2024-04-18 21:19:27.831706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.832042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.832071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.832341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.832659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.832688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.833000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.833309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.833342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.833649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.833884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.833913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.834176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.834472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.834500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.834834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.835060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.835088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.835323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.835634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.835664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 
00:26:12.013 [2024-04-18 21:19:27.835917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.836164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.836193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.836449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.836743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.836773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.837006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.837314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.837355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.837673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.837908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.837937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.838303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.838613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.838642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.838951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.839248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.839276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.839593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.839834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.839863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 
00:26:12.013 [2024-04-18 21:19:27.840106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.840349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.840378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.840690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.840952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.840981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.841228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.841458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.841487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.841744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.842038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.842067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.842433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.842751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.842780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.013 qpair failed and we were unable to recover it. 00:26:12.013 [2024-04-18 21:19:27.843152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.843404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.013 [2024-04-18 21:19:27.843433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.843769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.843977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.843991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 
00:26:12.014 [2024-04-18 21:19:27.844201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.844486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.844499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.844727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.844939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.844967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.845365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.845726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.845756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.845908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.846142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.846154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.846433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.846717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.846730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.846997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.847223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.847235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.847501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.847786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.847799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 
00:26:12.014 [2024-04-18 21:19:27.848081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.848398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.848427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.848806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.849113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.849142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.849452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.849775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.849804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.850104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.850383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.850396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.850711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.850991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.851004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.851239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.851545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.851575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.851826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.852137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.852166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 
00:26:12.014 [2024-04-18 21:19:27.852411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.852692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.852722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.853062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.853373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.853401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.853719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.854092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.854121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.854365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.854770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.854799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.855059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.855369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.855397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.855740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.856059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.856086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.856349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.856730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.856772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 
00:26:12.014 [2024-04-18 21:19:27.857057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.857506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.857532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.857775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.857961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.857978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.858283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.858578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.858594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.858880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.859128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.859144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.859442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.859735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.859752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.859969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.860279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.014 [2024-04-18 21:19:27.860295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.014 qpair failed and we were unable to recover it. 00:26:12.014 [2024-04-18 21:19:27.860522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.860753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.860769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 
00:26:12.015 [2024-04-18 21:19:27.860981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.861342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.861358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.861526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.861734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.861751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.861997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.862337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.862353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.862645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.862987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.863004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.863225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.863458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.863478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.863777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.864014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.864031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.864266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.864544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.864561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 
00:26:12.015 [2024-04-18 21:19:27.864852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.865144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.865160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.865497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.865795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.865811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.866104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.866380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.866396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.866741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.867086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.867102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.867499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.867790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.867807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.868097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.868390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.868407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.868646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.869008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.869023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 
00:26:12.015 [2024-04-18 21:19:27.869310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.869545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.869565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.869794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.870080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.870096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.870384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.870663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.870704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.870988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.871320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.871356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.871750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.872132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.872168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.872463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.872728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.872766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.873115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.873531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.873568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 
00:26:12.015 [2024-04-18 21:19:27.873859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.874111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.874147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.874425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.874705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.874742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.015 [2024-04-18 21:19:27.875148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.875462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.015 [2024-04-18 21:19:27.875498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.015 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.875797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.876047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.876082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.876437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.876755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.876792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.877188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.877439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.877477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.877886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.878199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.878236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 
00:26:12.016 [2024-04-18 21:19:27.878579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.878907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.878948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.879256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.879559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.879596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.880011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.880292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.880308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.880532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.880821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.880837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.881124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.881349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.881366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.881587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.881873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.881889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.882265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.882592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.882630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 
00:26:12.016 [2024-04-18 21:19:27.882904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.883301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.883337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.883613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.884000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.884036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.884424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.884808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.884824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.885131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.885418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.885434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.885663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.886004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.886020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.886337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.886602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.886640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.887043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.887374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.887410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 
00:26:12.016 [2024-04-18 21:19:27.887764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.888120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.888157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.888436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.888789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.888826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.889236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.889539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.889556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.889861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.890195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.890232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.890583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.890968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.891005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.891273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.891655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.891693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.891972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.892304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.892341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 
00:26:12.016 [2024-04-18 21:19:27.892762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.893014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.893051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.893391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.893684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.893721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.894084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.894402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.894438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.016 qpair failed and we were unable to recover it. 00:26:12.016 [2024-04-18 21:19:27.894783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.016 [2024-04-18 21:19:27.895098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.895142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.895528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.895854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.895891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.896105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.896393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.896409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461c90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.896800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.897157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.897170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 
00:26:12.017 [2024-04-18 21:19:27.897401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.897667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.897700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.898112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.898413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.898423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.898683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.898995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.899004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.899204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.899458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.899487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.899825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.900155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.900185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.900585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.900969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.900998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.901305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.901695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.901726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 
00:26:12.017 [2024-04-18 21:19:27.902043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.902289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.902318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.902569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.902962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.902990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.903384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.903720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.903757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.904146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.904442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.904471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.904854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.905171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.905180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.905443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.905777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.905807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.906142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.906443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.906452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 
00:26:12.017 [2024-04-18 21:19:27.906727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.906840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.906849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.907194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.907516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.907526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.907762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.908109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.908138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.908521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.908833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.908862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.909223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.909464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.909492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.909889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.910203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.910216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.910604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.910921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.910950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 
00:26:12.017 [2024-04-18 21:19:27.911340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.911562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.911571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.911835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.912134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.912163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.912464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.912782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.912814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.017 qpair failed and we were unable to recover it. 00:26:12.017 [2024-04-18 21:19:27.913154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.017 [2024-04-18 21:19:27.913551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.913560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.913895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.914206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.914234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.914627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.914845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.914874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.915179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.915486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.915525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 
00:26:12.018 [2024-04-18 21:19:27.915846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.916165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.916194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.916497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.916834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.916869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.917138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.917413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.917441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.917806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.918208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.918217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.918549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.918826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.918835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.919138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.919498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.919536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.919859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.920245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.920273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 
00:26:12.018 [2024-04-18 21:19:27.920580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.920875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.920884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.921119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.921377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.921405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.921802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.922107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.922116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.922490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.922705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.922715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.922905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.923230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.923244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.923584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.923977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.924006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.924395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.924689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.924719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 
00:26:12.018 [2024-04-18 21:19:27.925032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.925356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.925366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.925709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.925905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.925915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.926007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.926358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.926387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.926750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.927056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.018 [2024-04-18 21:19:27.927084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.018 qpair failed and we were unable to recover it. 00:26:12.018 [2024-04-18 21:19:27.927356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.927622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.927632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.927911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.928202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.928211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.928468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.928727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.928737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 
00:26:12.290 [2024-04-18 21:19:27.929014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.929225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.929234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.929554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.929948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.929977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.930392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.930777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.930806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.931197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.931574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.931603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.931837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.932147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.932175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.932552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.932939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.932967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.933333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.933654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.933684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 
00:26:12.290 [2024-04-18 21:19:27.934072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.934302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.934330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.934710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.935098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.935127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.935443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.935726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.935756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.936004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.936369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.936397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.936665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.936896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.936925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.937244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.937489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.937537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.290 qpair failed and we were unable to recover it. 00:26:12.290 [2024-04-18 21:19:27.937933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.938160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.290 [2024-04-18 21:19:27.938189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 
00:26:12.291 [2024-04-18 21:19:27.938501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.938746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.938775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.939143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.939325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.939352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.939719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.939924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.939953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.940346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.940644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.940673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.941062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.941446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.941474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.941792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.942092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.942125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.942490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.942815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.942845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 
00:26:12.291 [2024-04-18 21:19:27.943162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.943489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.943528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.943797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.944101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.944109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.944441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.944744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.944775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.945148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.945437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.945446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.945737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.946031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.946060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.946379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.946705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.946734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.947139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.947530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.947559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 
00:26:12.291 [2024-04-18 21:19:27.947849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.948189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.948218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.948551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.948791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.948820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.949182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.949457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.949485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.949824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.950063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.950092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.950424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.950684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.950714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.951032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.951384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.951393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.951665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.951992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.952002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 
00:26:12.291 [2024-04-18 21:19:27.952217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.952483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.952493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.952803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.953082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.953092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.953363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.953710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.953740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.953986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.954371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.954399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.954718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.955029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.955057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.955446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.955772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.955802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 00:26:12.291 [2024-04-18 21:19:27.956203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.956523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.956552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.291 qpair failed and we were unable to recover it. 
00:26:12.291 [2024-04-18 21:19:27.956858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.291 [2024-04-18 21:19:27.957175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.957205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.957464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.957644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.957653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.958001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.958243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.958272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.958583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.958961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.958989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.959191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.959522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.959552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.959948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.960335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.960363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.960750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.961076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.961105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 
00:26:12.292 [2024-04-18 21:19:27.961398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.961762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.961791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.962116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.962450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.962478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.962803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.963126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.963155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.963411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.963802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.963832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.964136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.964504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.964542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.964933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.965239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.965267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.965675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.965969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.965997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 
00:26:12.292 [2024-04-18 21:19:27.966391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.966760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.966789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.967099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.967412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.967441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.967762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.968176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.968203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.968563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.968946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.968975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.969311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.969661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.969691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.969994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.970310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.970338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.970738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.970988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.971016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 
00:26:12.292 [2024-04-18 21:19:27.971414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.971777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.971808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.972064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.972421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.972450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.972823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.973205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.973233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.973524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.973884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.973913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.974279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.974529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.974538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.974834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.975129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.975158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.975569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.975956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.975985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 
00:26:12.292 [2024-04-18 21:19:27.976375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.976665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.976694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.292 qpair failed and we were unable to recover it. 00:26:12.292 [2024-04-18 21:19:27.977089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.292 [2024-04-18 21:19:27.977479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.977507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.977929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.978196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.978224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.978538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.978902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.978931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.979303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.979694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.979723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.980112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.980439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.980468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.980868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.981128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.981156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 
00:26:12.293 [2024-04-18 21:19:27.981462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.981816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.981825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.982098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.982460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.982488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.982894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.983280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.983309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.983687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.984070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.984099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.984406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.984772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.984803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.985165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.985543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.985573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.985960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.986221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.986250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 
00:26:12.293 [2024-04-18 21:19:27.986644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.987019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.987047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.987349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.987732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.987762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.988080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.988488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.988525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.988892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.989283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.989313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.989699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.989994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.990030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.990360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.990740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.990769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.991088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.991448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.991477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 
00:26:12.293 [2024-04-18 21:19:27.991872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.992263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.992292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.992657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.993048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.993077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.993468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.993856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.993887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.994274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.994659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.994688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.995077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.995406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.995435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.995797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.996166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.996194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.996543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.996923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.996952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 
00:26:12.293 [2024-04-18 21:19:27.997253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.997619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.997649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.998040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.998422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.998451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.293 qpair failed and we were unable to recover it. 00:26:12.293 [2024-04-18 21:19:27.998821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.293 [2024-04-18 21:19:27.999202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:27.999230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:27.999534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:27.999894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:27.999923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.000309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.000696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.000727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.001113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.001438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.001467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.001790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.002106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.002134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 
00:26:12.294 [2024-04-18 21:19:28.002528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.002868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.002897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.003289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.003662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.003692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.004009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.004321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.004349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.004758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.005144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.005172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.005564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.005956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.005985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.006384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.006745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.006774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.007161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.007549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.007579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 
00:26:12.294 [2024-04-18 21:19:28.007966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.008259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.008300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.008572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.008902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.008931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.009272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.009532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.009542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.009888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.010274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.010283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.010642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.011033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.011062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.011464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.011833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.011863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.012250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.012596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.012626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 
00:26:12.294 [2024-04-18 21:19:28.012926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.013258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.013286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.013657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.013988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.014017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.014411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.014725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.014736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.015061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.015418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.015446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.015749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.016080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.016109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.294 qpair failed and we were unable to recover it. 00:26:12.294 [2024-04-18 21:19:28.016502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.016936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.294 [2024-04-18 21:19:28.016965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.017266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.017525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.017555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 
00:26:12.295 [2024-04-18 21:19:28.017949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.018274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.018303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.018621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.018987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.019017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.019407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.019770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.019800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.020117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.020506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.020555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.020960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.021194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.021223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.021616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.022000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.022035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.022431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.022810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.022820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 
00:26:12.295 [2024-04-18 21:19:28.023176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.023562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.023592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.023910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.024247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.024275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.024640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.025031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.025060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.025455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.025846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.025876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.026265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.026562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.026593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.027008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.027396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.027425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.027738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.028084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.028094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 
00:26:12.295 [2024-04-18 21:19:28.028462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.028858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.028887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.029256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.029589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.029628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.029931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.030245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.030274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.030671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.031075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.031103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.031499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.031826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.031854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.032110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.032474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.032503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.032819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.033230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.033258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 
00:26:12.295 [2024-04-18 21:19:28.033559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.033829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.033858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.034246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.034581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.034611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.035006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.035395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.035424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.035746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.036087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.036116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.036486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.036859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.036888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.295 [2024-04-18 21:19:28.037228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.037542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.295 [2024-04-18 21:19:28.037573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.295 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.037980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.038211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.038239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 
00:26:12.296 [2024-04-18 21:19:28.038633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.039023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.039051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.039374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.039731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.039740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.040014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.040292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.040301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.040623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.041066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.041095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.041471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.041851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.041881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.042205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.042590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.042620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.042953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.043261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.043289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 
00:26:12.296 [2024-04-18 21:19:28.043656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.043961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.043990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.044416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.044802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.044832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.045228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.045620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.045650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.046055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.046443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.046471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.046832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.047222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.047251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.047646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.047930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.047939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.048296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.048685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.048715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 
00:26:12.296 [2024-04-18 21:19:28.049103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.049492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.049528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.049930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.050296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.050325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.050717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.051113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.051142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.051564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.051973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.052005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.052413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.052807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.052836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.053229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.053627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.053658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.054030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.054413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.054443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 
00:26:12.296 [2024-04-18 21:19:28.054838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.055231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.055259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.055644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.056015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.056044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.056351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.056611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.056621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.056998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.057361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.057390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.057763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.058150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.058179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.058531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.058868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.058898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 00:26:12.296 [2024-04-18 21:19:28.059268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.059587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.296 [2024-04-18 21:19:28.059617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.296 qpair failed and we were unable to recover it. 
00:26:12.296 [2024-04-18 21:19:28.060033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.060422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.060451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.060822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.061228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.061257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.061575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.061939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.061968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.062316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.062620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.062649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.062995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.063362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.063391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.063785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.064174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.064203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.064600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.064991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.065019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 
00:26:12.297 [2024-04-18 21:19:28.065366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.065753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.065783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.066175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.066567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.066597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.066994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.067294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.067322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.067712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.068113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.068142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.068496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.068877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.068907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.069303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.069617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.069647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.070052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.070354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.070384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 
00:26:12.297 [2024-04-18 21:19:28.070702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.071045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.071082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.071379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.071601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.071611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.071872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.072203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.072232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.072573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.072968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.072996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.073337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.073730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.073761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.074066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.074481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.074519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.074871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.075245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.075274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 
00:26:12.297 [2024-04-18 21:19:28.075645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.076041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.076069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.076472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.076874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.076904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.077300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.077621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.077650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.077968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.078335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.078363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.078736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.079089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.079118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.079449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.079804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.079813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.080104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.080491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.080527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 
00:26:12.297 [2024-04-18 21:19:28.080897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.081233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.297 [2024-04-18 21:19:28.081262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.297 qpair failed and we were unable to recover it. 00:26:12.297 [2024-04-18 21:19:28.081636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.081946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.081974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.082383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.082750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.082779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.083175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.083569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.083598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.083970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.084359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.084388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.084792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.085180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.085209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.085608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.085872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.085901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 
00:26:12.298 [2024-04-18 21:19:28.086308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.086698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.086728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.087121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.087519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.087550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.087870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.088262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.088291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.088553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.088938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.088966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.089361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.089752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.089782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.090095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.090508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.090547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.090987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.091377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.091405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 
00:26:12.298 [2024-04-18 21:19:28.091795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.092096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.092124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.092523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.092852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.092881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.093184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.093570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.093601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.093995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.094358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.094386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.094759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.095077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.095106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.095482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.095880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.095910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.096234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.096602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.096632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 
00:26:12.298 [2024-04-18 21:19:28.097025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.097417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.097445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.097816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.098212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.098241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.098627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.099016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.099044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.099415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.099676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.099685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.100050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.100443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.100471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.100784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.101143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.101172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.101568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.101815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.101844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 
00:26:12.298 [2024-04-18 21:19:28.102166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.102470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.102498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.298 [2024-04-18 21:19:28.102815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.103045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.298 [2024-04-18 21:19:28.103055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.298 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.103331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.103707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.103717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.104093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.104472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.104482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.104840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.105192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.105201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.105490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.105821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.105831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.106188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.106459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.106469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 
00:26:12.299 [2024-04-18 21:19:28.106844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.107128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.107138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.107351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.107635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.107646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.107940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.108290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.108300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.108576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.108929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.108940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.109216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.109571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.109581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.109862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.110062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.110072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.110345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.110702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.110712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 
00:26:12.299 [2024-04-18 21:19:28.110950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.111278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.111288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.111624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.111956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.111966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.112273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.112564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.112574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.112907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.113261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.113270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.113590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.113946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.113956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.114320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.114699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.114710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.114979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.115276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.115286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 
00:26:12.299 [2024-04-18 21:19:28.115641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.115982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.115992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.116332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.116614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.116624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.116895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.117253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.117262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.117541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.117822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.117831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.118114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.118449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.118458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.299 qpair failed and we were unable to recover it. 00:26:12.299 [2024-04-18 21:19:28.118832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.119100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.299 [2024-04-18 21:19:28.119109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.119326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.119589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.119599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 
00:26:12.300 [2024-04-18 21:19:28.119970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.120203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.120232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.120632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.120994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.121027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.121281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.121609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.121639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.122058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.122430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.122459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.122850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.123244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.123272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.123637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.123920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.123929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.124204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.124576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.124610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 
00:26:12.300 [2024-04-18 21:19:28.124977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.125368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.125398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.125712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.126126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.126155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.126552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.126869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.126897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.127298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.127632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.127642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.127999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.128341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.128350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.128693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.128975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.128985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.129226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.129623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.129652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 
00:26:12.300 [2024-04-18 21:19:28.130029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.130404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.130432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.130832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.131207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.131235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.131556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.131867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.131902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.132253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.132647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.132677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.132986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.133340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.133349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.133607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.133967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.133996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.134391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.134783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.134813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 
00:26:12.300 [2024-04-18 21:19:28.135129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.135531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.135560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.135954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.136307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.136335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.136719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.137110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.137138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.137447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.137865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.137895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.138232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.138600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.138630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.138880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.139229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.139267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.300 qpair failed and we were unable to recover it. 00:26:12.300 [2024-04-18 21:19:28.139588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.139944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.300 [2024-04-18 21:19:28.139954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 
00:26:12.301 [2024-04-18 21:19:28.140297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.140665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.140695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.141094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.141483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.141523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.141903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.142243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.142272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.142603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.142918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.142928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.143346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.143662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.143691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.144100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.144426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.144455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.144729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.144977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.145006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 
00:26:12.301 [2024-04-18 21:19:28.145327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.145723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.145752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.146099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.146486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.146533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.146928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.147263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.147272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.147569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.147921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.147950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.148205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.148594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.148624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.149021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.149408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.149437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.149754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.150144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.150172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 
00:26:12.301 [2024-04-18 21:19:28.150481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.150889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.150898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.151255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.151624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.151653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.152103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.152313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.152323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.152678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.152985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.153014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.153430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.153825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.153855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.154249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.154638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.154668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.155083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.155399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.155428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 
00:26:12.301 [2024-04-18 21:19:28.155746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.156133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.156154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.156505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.156893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.156903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.157137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.157482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.157519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.157858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.158187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.158215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.158535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.158852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.158880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.159294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.159608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.159637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 00:26:12.301 [2024-04-18 21:19:28.160045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.160432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.160462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.301 qpair failed and we were unable to recover it. 
00:26:12.301 [2024-04-18 21:19:28.160846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.301 [2024-04-18 21:19:28.161242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.161270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.161704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.162087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.162097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.162478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.162791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.162821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.163093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.163448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.163457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.163758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.164079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.164107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.164453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.164764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.164774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.165151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.165480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.165491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 
00:26:12.302 [2024-04-18 21:19:28.165784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.166176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.166205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.166529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.166833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.166861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.167209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.167466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.167495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.167927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.168296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.168324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.168731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.169127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.169155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.169551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.169814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.169842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.170182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.170525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.170554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 
00:26:12.302 [2024-04-18 21:19:28.170973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.171364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.171393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.171721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.172119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.172147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.172486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.172883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.172912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.173252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.173563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.173592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.173916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.174225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.174253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.174676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.175056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.175085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.175480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.175844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.175855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 
00:26:12.302 [2024-04-18 21:19:28.176245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.176568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.176599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.176875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.177194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.177222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.177631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.178005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.178034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.178445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.178770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.178800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.179050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.179461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.179491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.179872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.180200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.180229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 00:26:12.302 [2024-04-18 21:19:28.180556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.180806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.180835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.302 qpair failed and we were unable to recover it. 
00:26:12.302 [2024-04-18 21:19:28.181248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.181642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.302 [2024-04-18 21:19:28.181671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.181996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.182416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.182444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.182874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.183288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.183317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.183729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.183975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.184007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.184404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.184792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.184823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.185152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.185529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.185559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.185842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.186151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.186192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 
00:26:12.303 [2024-04-18 21:19:28.186537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.186853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.186881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.187199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.187569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.187599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.187977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.188343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.188371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.188748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.189067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.189096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.189472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.189877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.189908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.190296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.190613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.190643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.190908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.191283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.191312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 
00:26:12.303 [2024-04-18 21:19:28.191718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.192116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.192145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.192454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.192874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.192904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.193259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.193640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.193671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.193944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.194348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.194378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.194760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.195091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.195119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.195422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.195799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.195829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.196020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.196322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.196352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 
00:26:12.303 [2024-04-18 21:19:28.196687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.197040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.197069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.197454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.197840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.197870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.198276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.198606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.198637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.199045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.199306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.199335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.199772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.200124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.200153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.200488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.200879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.200909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 00:26:12.303 [2024-04-18 21:19:28.201332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.201652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.201683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.303 qpair failed and we were unable to recover it. 
00:26:12.303 [2024-04-18 21:19:28.202090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.202406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-04-18 21:19:28.202435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.202746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.203149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.203178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.203588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.203963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.203992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.204299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.204674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.204706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.205100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.205374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.205402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.205787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.206166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.206176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.206482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.206750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.206761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 
00:26:12.304 [2024-04-18 21:19:28.207118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.207319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.207330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.207602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.207891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.207901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.208206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.208539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.304 [2024-04-18 21:19:28.208550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.304 qpair failed and we were unable to recover it. 00:26:12.304 [2024-04-18 21:19:28.208823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.209173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.209183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.209410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.209548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.209559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.209923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.210191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.210202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.210415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.210621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.210631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 
00:26:12.591 [2024-04-18 21:19:28.210997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.211358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.211369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.211670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.211898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.211909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.212193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.212471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.212482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.212696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.212932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.212942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.213230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.213504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.213521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.213808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.214095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.214105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.214432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.214727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.214737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 
00:26:12.591 [2024-04-18 21:19:28.215100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.215377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.215388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.215669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.215947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.215957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.216248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.216591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.216601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.216818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.217014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.217025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.217129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.217337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.217348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.591 [2024-04-18 21:19:28.217565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.217799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.591 [2024-04-18 21:19:28.217810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.591 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.218170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.218382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.218392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 
00:26:12.592 [2024-04-18 21:19:28.218618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.218957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.218967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.219302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.219524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.219534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.219735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.220020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.220030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.220365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.220647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.220657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.220990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.221324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.221334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.221690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.221902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.221912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.222252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.222584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.222594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 
00:26:12.592 [2024-04-18 21:19:28.222804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.223193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.223203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.223491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.223699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.223710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.224072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.224454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.224464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.224738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.225097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.225106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.225500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.225803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.225813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.226166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.226522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.226532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.226734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.227009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.227018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 
00:26:12.592 [2024-04-18 21:19:28.227403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.227669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.227680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.227912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.228270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.228280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.228661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.228961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.228970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.229323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.229594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.229605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.229900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.230259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.230269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.230646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.230955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.230964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.231326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.231549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.231559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 
00:26:12.592 [2024-04-18 21:19:28.231907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.232265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.232274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.232605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.232959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.232969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.233254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.233604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.233614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.233835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.234190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.234199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.234583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.234940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.234950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.235235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.235654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.235665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 00:26:12.592 [2024-04-18 21:19:28.236020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.236378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.592 [2024-04-18 21:19:28.236390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.592 qpair failed and we were unable to recover it. 
00:26:12.593 [2024-04-18 21:19:28.236678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.237109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.237119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.237405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.237698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.237708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.238050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.238313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.238322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.238714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.239114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.239142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.239450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.239866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.239897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.240293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.240615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.240645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.240984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.241239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.241267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 
00:26:12.593 [2024-04-18 21:19:28.241673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.242068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.242096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.242436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.242831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.242860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.243265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.243498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.243550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.243954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.244335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.244363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.244779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.245095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.245123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.245448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.245752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.245782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.246211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.246589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.246620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 
00:26:12.593 [2024-04-18 21:19:28.246997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.247391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.247420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.247793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.248182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.248191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.248551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.248951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.248980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.249321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.249716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.249747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.250147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.250446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.250474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.250908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.251322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.251356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.251735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.252129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.252157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 
00:26:12.593 [2024-04-18 21:19:28.252552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.252949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.252977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.253355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.253770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.253799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.254103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.254439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.254468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.254871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.255128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.255137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.255407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.255682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.255720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.256027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.256286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.256315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 00:26:12.593 [2024-04-18 21:19:28.256699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.256950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.256978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.593 qpair failed and we were unable to recover it. 
00:26:12.593 [2024-04-18 21:19:28.257375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.593 [2024-04-18 21:19:28.257771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.257801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.258080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.258396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.258430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.258776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.259099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.259129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.259441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.259837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.259867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.260219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.260620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.260650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.261052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.261442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.261471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.261890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.262211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.262239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 
00:26:12.594 [2024-04-18 21:19:28.262649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.262922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.262950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.263342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.263759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.263789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.264185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.264394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.264404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.264670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.264991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.265019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.265418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.265792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.265822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.266213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.266468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.266497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.266786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.267111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.267140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 
00:26:12.594 [2024-04-18 21:19:28.267543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.267865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.267894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.268237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.268609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.268639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.268965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.269334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.269362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.269780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.270101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.270129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.270518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.270911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.270940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.271338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.271657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.271687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.272031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.272294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.272323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 
00:26:12.594 [2024-04-18 21:19:28.272641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.272964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.272993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.273408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.273793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.273823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.274138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.274496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.274533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.274910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.275208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.275217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.275598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.275979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.276007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.276375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.276691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.276721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.277094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.277494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.277541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 
00:26:12.594 [2024-04-18 21:19:28.277870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.278185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.278194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.594 qpair failed and we were unable to recover it. 00:26:12.594 [2024-04-18 21:19:28.278566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.594 [2024-04-18 21:19:28.278958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.278987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.279328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.279597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.279626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.279880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.280187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.280216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.280642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.281034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.281063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.281379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.281770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.281800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.282195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.282562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.282592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 
00:26:12.595 [2024-04-18 21:19:28.282989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.283379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.283408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.283782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.284173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.284202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.284606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.284931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.284960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.285360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.285772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.285802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.286127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.286457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.286466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.286804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.287125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.287154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.287528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.287838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.287868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 
00:26:12.595 [2024-04-18 21:19:28.288275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.288683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.288714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.289093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.289431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.289460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.289778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.290196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.290224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.290595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.290987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.291016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.291355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.291755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.291785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.292113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.292529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.292559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.292954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.293356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.293385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 
00:26:12.595 [2024-04-18 21:19:28.293706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.294025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.294054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.294460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.294790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.294820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.295220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.295587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.295617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.296021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.296410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.296439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.595 qpair failed and we were unable to recover it. 00:26:12.595 [2024-04-18 21:19:28.296742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.297066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.595 [2024-04-18 21:19:28.297095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.596 qpair failed and we were unable to recover it. 00:26:12.596 [2024-04-18 21:19:28.297498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.596 [2024-04-18 21:19:28.297897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.596 [2024-04-18 21:19:28.297907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.596 qpair failed and we were unable to recover it. 00:26:12.596 [2024-04-18 21:19:28.298276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.596 [2024-04-18 21:19:28.298546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.596 [2024-04-18 21:19:28.298557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.596 qpair failed and we were unable to recover it. 
00:26:12.596 [2024-04-18 21:19:28.298918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.596 [2024-04-18 21:19:28.299283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.596 [2024-04-18 21:19:28.299312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420
00:26:12.596 qpair failed and we were unable to recover it.
[... this four-line sequence (two "connect() failed, errno = 111" records, one "sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats back-to-back from 21:19:28.298 through 21:19:28.346 ...]
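errno = 111 here is ECONNREFUSED on Linux: the initiator's connect() reaches 10.0.0.2, but nothing is accepting on TCP port 4420 while the target application is down. A minimal probe sketch, reusing only the namespace, address and port that appear in this log (illustrative only, not part of the autotest scripts):

  #!/usr/bin/env bash
  # Check whether the NVMe/TCP listener the initiator keeps retrying is up.
  NETNS=cvl_0_0_ns_spdk   # target-side network namespace seen in this log
  ADDR=10.0.0.2
  PORT=4420

  # Anything listening on the port inside the target's namespace?
  ip netns exec "$NETNS" ss -ltn "sport = :$PORT"

  # One TCP connect attempt; an immediate failure is the same ECONNREFUSED
  # (errno 111) that posix_sock_create() is reporting above.
  if timeout 2 bash -c "exec 3<>/dev/tcp/$ADDR/$PORT" 2>/dev/null; then
    echo "$ADDR:$PORT is accepting connections"
  else
    echo "$ADDR:$PORT refused or timed out (listener not up yet)"
  fi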
00:26:12.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3203927 Killed "${NVMF_APP[@]}" "$@"
00:26:12.598 21:19:28 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:26:12.598 21:19:28 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:12.598 21:19:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:12.598 21:19:28 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:12.598 21:19:28 -- common/autotest_common.sh@10 -- # set +x
00:26:12.598 21:19:28 -- nvmf/common.sh@470 -- # nvmfpid=3204739
00:26:12.598 21:19:28 -- nvmf/common.sh@471 -- # waitforlisten 3204739
00:26:12.598 21:19:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:12.598 21:19:28 -- common/autotest_common.sh@817 -- # '[' -z 3204739 ']'
00:26:12.598 21:19:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:12.598 21:19:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:12.598 21:19:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:12.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:12.598 21:19:28 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:12.598 21:19:28 -- common/autotest_common.sh@10 -- # set +x
[... the "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence keeps repeating, interleaved with the trace above, from 21:19:28.347 through 21:19:28.359 ...]
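The trace above shows the shape of the restart: the old nvmf_tgt (PID 3203927) is killed, disconnect_init calls nvmfappstart, and the new target (PID 3204739) is launched in the cvl_0_0_ns_spdk namespace while waitforlisten waits for its RPC socket. A rough sketch of that launch-and-wait pattern, assuming the paths, flags and values shown in the trace (this is not the SPDK test helpers' actual code, just their general shape):

  # Launch the target in the test namespace and remember its PID.
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!

  # Poll until the app creates its RPC socket, giving up after max_retries.
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    [[ -S $rpc_addr ]] && break                      # RPC socket exists -> target is up
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    sleep 0.5
  done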
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence continues repeating from 21:19:28.359 through 21:19:28.400 while the new target initializes ...]
00:26:12.601 [2024-04-18 21:19:28.400522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.400906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.400916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.401247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.401576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.401586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.401900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.402188] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:12.601 [2024-04-18 21:19:28.402233] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.601 [2024-04-18 21:19:28.402233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.402243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.402535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.402887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.402897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.403233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.403530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.403540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.403781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.404176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.404186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 
00:26:12.601 [2024-04-18 21:19:28.404541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.404928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.404937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.405301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.405658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.405668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.406027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.406403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.406412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.406753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.407096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.407106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.407383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.407582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.407593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.407939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.408216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.408226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.408596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.408949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.408959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 
00:26:12.601 [2024-04-18 21:19:28.409343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.409642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.409652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.409989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.410340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.410349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.410628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.410955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.601 [2024-04-18 21:19:28.410965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.601 qpair failed and we were unable to recover it. 00:26:12.601 [2024-04-18 21:19:28.411297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.411649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.411659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.411884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.412241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.412251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.412585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.412881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.412891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.413254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.413532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.413542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 
00:26:12.602 [2024-04-18 21:19:28.413786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.414129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.414139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.414470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.414841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.414851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.415144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.415445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.415455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.415825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.416117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.416126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.416463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.416761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.416771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.417125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.417396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.417406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.417689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.418051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.418060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 
00:26:12.602 [2024-04-18 21:19:28.418454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.418735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.418745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.419095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.419450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.419460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.419841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.420192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.420202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.420487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.420742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.420752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.421013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.421291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.421301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.421580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.421878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.421889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.422253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.422622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.422633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 
00:26:12.602 [2024-04-18 21:19:28.422989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.423273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.423284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.423496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.423798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.423808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.424141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.424404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.424414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.424760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.425116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.425125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.425485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.425771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.425781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.426023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.426411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.426421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.426753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.427030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.427040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 
00:26:12.602 [2024-04-18 21:19:28.427316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.427666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.427676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.428029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.428422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.428432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.428809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.429090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.429100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.429424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.429782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.602 [2024-04-18 21:19:28.429792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.602 qpair failed and we were unable to recover it. 00:26:12.602 [2024-04-18 21:19:28.430142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.430402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.430412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.430757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.431036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.431046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.431337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.431663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.431673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 
00:26:12.603 [2024-04-18 21:19:28.432000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.432285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.432294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.432662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.432963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.432973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.433235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.433589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.433599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.433872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.603 [2024-04-18 21:19:28.434199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.434210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.434578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.434853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.434864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.435135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.435505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.435527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.435895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.436170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.436180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 
00:26:12.603 [2024-04-18 21:19:28.436537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.436841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.436851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.437216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.437544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.437554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.437833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.438187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.438197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.438479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.438836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.438846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.439230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.439584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.439594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.439942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.440208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.440219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.440570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.440921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.440931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 
00:26:12.603 [2024-04-18 21:19:28.441282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.441631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.441641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.441992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.442346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.442355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.442707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.442985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.442995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.443281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.443632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.443642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.443902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.444254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.444263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.444543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.444835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.444844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.445118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.445404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.445414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 
00:26:12.603 [2024-04-18 21:19:28.445775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.446038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.446047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.446354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.446698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.446708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.446978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.447326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.447335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.447624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.447977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.447985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.603 qpair failed and we were unable to recover it. 00:26:12.603 [2024-04-18 21:19:28.448288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.603 [2024-04-18 21:19:28.448551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.448560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.448853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.449142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.449150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.449487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.449747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.449757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 
00:26:12.604 [2024-04-18 21:19:28.450025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.450378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.450388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.450715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.451001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.451011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.451287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.451584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.451593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.451940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.452169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.452179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.452519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.452865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.452874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.453071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.453330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.453340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.453638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.453927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.453936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 
00:26:12.604 [2024-04-18 21:19:28.454283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.454607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.454617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.454912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.455182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.455191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.455494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.455849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.455859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.456205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.456530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.456540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.456816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.457185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.457194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.457495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.457730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.457740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.458055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.458419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.458429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 
00:26:12.604 [2024-04-18 21:19:28.458688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.459059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.459068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.459354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.459720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.459730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.459990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.460213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.460223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.460559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.460909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.460919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.461192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.461482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.461492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.461906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.462242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.462251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.462555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.462922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.462931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 
00:26:12.604 [2024-04-18 21:19:28.463157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.463506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.463525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.463808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.464132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.464141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.464493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.464841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.464851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.465112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.465464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.465474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.465690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.466015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.604 [2024-04-18 21:19:28.466024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.604 qpair failed and we were unable to recover it. 00:26:12.604 [2024-04-18 21:19:28.466389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.466740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.466750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.467102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.467425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.467435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 
00:26:12.605 [2024-04-18 21:19:28.467780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.468079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.468088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.468362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.468659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.468670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.469043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.469369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.469378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.469725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.469988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.469998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.470344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.470699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.470709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.471033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.471320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.471331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.471603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.471903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.471913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 
00:26:12.605 [2024-04-18 21:19:28.472170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.472519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.472528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.472854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.473175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.473184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.473457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.473820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.473830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.474132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.474406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.474416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.474755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.475022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.475031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.475341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.475604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.475614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.475961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.476261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.476270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 
00:26:12.605 [2024-04-18 21:19:28.476623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.476831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.476840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.477141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.477404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.477415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.477757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.478065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.478074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.478456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.478714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.478724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.478985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.479325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.479334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.479687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.480009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.480019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.480371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.480641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.480650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 
00:26:12.605 [2024-04-18 21:19:28.480949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.481295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.481304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.605 qpair failed and we were unable to recover it. 00:26:12.605 [2024-04-18 21:19:28.481585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.605 [2024-04-18 21:19:28.481786] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.605 [2024-04-18 21:19:28.481931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.481941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.482159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.482507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.482523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.482849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.483197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.483207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.483598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.483930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.483941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.484171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.484519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.484530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.484819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.485093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.485103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 
00:26:12.606 [2024-04-18 21:19:28.485446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.485774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.485784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.486128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.486427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.486437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.486781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.487124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.487134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.487402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.487750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.487761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.488112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.488437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.488447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.488722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.489089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.489099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.489424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.489725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.489736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 
00:26:12.606 [2024-04-18 21:19:28.489995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.490346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.490357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.490709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.491035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.491046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.491374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.491645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.491657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.491933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.492286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.492296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.492592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.492963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.492973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.493251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.493598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.493608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.493952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.494279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.494288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 
00:26:12.606 [2024-04-18 21:19:28.494636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.494958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.494968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.495314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.495661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.495671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.496015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.496269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.496278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.496560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.496918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.496930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.497276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.497543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.497552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.497825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.498195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.498204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.498480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.498744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.498753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 
00:26:12.606 [2024-04-18 21:19:28.499045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.499319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.499328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.499549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.499895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.606 [2024-04-18 21:19:28.499904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.606 qpair failed and we were unable to recover it. 00:26:12.606 [2024-04-18 21:19:28.500163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.500434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.500444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.500774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.501128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.501136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.501523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.501845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.501855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.502194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.502542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.502552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.502821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.503143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.503154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 
00:26:12.607 [2024-04-18 21:19:28.503448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.503807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.503817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.504141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.504485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.504495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.504772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.505063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.505073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.505371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.505727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.505737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.506063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.506417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.506427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.506750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.507088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.507098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.507424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.507693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.507703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 
00:26:12.607 [2024-04-18 21:19:28.508045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.508398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.508407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.508702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.509027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.509037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.509386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.509729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.509741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.510090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.510366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.510375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.510661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.510942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.510951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.607 [2024-04-18 21:19:28.511210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.511492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.607 [2024-04-18 21:19:28.511501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.607 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.511831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.512162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.512171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 
00:26:12.875 [2024-04-18 21:19:28.512439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.512802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.512812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.513135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.513402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.513412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.513761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.514107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.514117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.514387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.514731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.514742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.515015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.515358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.515367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.515717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.516063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.516074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.516427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.516725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.516735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 
00:26:12.875 [2024-04-18 21:19:28.516996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.517295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.517305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.517663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.518022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.518032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.518355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.518626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.518635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.518901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.519169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.519179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.519450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.519739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.519749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.520098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.520420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.520431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.520781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.521107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.521118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 
00:26:12.875 [2024-04-18 21:19:28.521463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.521813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.521824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.875 qpair failed and we were unable to recover it. 00:26:12.875 [2024-04-18 21:19:28.522098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.875 [2024-04-18 21:19:28.522443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.522453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.522746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.523073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.523083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.523365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.523647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.523657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.523958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.524255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.524265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.524531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.524881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.524891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.525159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.525418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.525429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 
00:26:12.876 [2024-04-18 21:19:28.525687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.526039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.526048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.526400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.526747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.526759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.527031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.527379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.527389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.527739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.528063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.528074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.528367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.528738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.528749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.529123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.529470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.529480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.529745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.530096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.530105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 
00:26:12.876 [2024-04-18 21:19:28.530377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.530723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.530732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.531005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.531346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.531356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.531701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.531956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.531976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.532324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.532673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.532683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.533035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.533356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.533365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.533642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.533982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.533992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.534336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.534684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.534693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 
00:26:12.876 [2024-04-18 21:19:28.535035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.535305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.535315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.535611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.535959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.535968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.536270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.536633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.536643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.537017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.537338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.537347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.537693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.537914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.537923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.538233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.538578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.538587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.538858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.539142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.539152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 
00:26:12.876 [2024-04-18 21:19:28.539435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.539641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.539650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.540024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.540343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.876 [2024-04-18 21:19:28.540352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.876 qpair failed and we were unable to recover it. 00:26:12.876 [2024-04-18 21:19:28.540704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.541026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.541035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.541358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.541655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.541664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.542017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.542365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.542374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.542722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.543049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.543058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.543349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.543692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.543702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 
00:26:12.877 [2024-04-18 21:19:28.543983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.544332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.544342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.544689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.545037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.545046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.545314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.545636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.545646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.545927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.546206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.546215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.546481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.546845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.546855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.547228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.547551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.547561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.547910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.548232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.548241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 
00:26:12.877 [2024-04-18 21:19:28.548592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.548938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.548947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.549298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.549563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.549573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.549920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.550266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.550275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.550601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.550877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.550886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.551247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.551517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.551526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.551871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.552144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.552154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.552476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.552819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.552829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 
00:26:12.877 [2024-04-18 21:19:28.553037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.553291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.553300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.553653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.553951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.553960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.554239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.554583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.554593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.554945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.555261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.555270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.555622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.555944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.555954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.556300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.556647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.556657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 00:26:12.877 [2024-04-18 21:19:28.556980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.557324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.877 [2024-04-18 21:19:28.557334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.877 qpair failed and we were unable to recover it. 
00:26:12.877 [2024-04-18 21:19:28.557628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.877 [2024-04-18 21:19:28.557975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.877 [2024-04-18 21:19:28.557984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420
00:26:12.877 qpair failed and we were unable to recover it.
00:26:12.877 [2024-04-18 21:19:28.558331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.877 [2024-04-18 21:19:28.558655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.877 [2024-04-18 21:19:28.558666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420
00:26:12.877 qpair failed and we were unable to recover it.
00:26:12.877 [2024-04-18 21:19:28.558968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.877 [2024-04-18 21:19:28.559248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.878 [2024-04-18 21:19:28.559259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420
00:26:12.878 qpair failed and we were unable to recover it.
00:26:12.878 [2024-04-18 21:19:28.559609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.878 [2024-04-18 21:19:28.559893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.878 [2024-04-18 21:19:28.559903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420
00:26:12.878 qpair failed and we were unable to recover it.
00:26:12.878 [2024-04-18 21:19:28.560153] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:12.878 [2024-04-18 21:19:28.560180] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:12.878 [2024-04-18 21:19:28.560187] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:12.878 [2024-04-18 21:19:28.560193] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:12.878 [2024-04-18 21:19:28.560198] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:12.878 [2024-04-18 21:19:28.560255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.878 [2024-04-18 21:19:28.560302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:12.878 [2024-04-18 21:19:28.560410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:12.878 [2024-04-18 21:19:28.560532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:12.878 [2024-04-18 21:19:28.560577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.878 [2024-04-18 21:19:28.560587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420
00:26:12.878 qpair failed and we were unable to recover it.
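[Note: the app_setup_trace NOTICE above is the target application telling the operator how to capture its trace. A minimal sketch of what that looks like on the test node, assuming the SPDK tools were built under the job workspace and that the instance id is 0 as reported; the binary path and output filenames below are assumptions for illustration, only the 'spdk_trace -s nvmf -i 0' invocation and /dev/shm/nvmf_trace.0 come from the log itself:
    # snapshot the trace ring buffer named in the NOTICE (shared-memory instance id 0)
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw shared-memory file for offline decoding later
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
]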
00:26:12.878 [2024-04-18 21:19:28.560533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:12.878 [2024-04-18 21:19:28.560935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.561257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.561266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.561617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.561894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.561903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.562235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.562584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.562594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.562947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.563269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.563278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.563623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.563904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.563914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.564221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.564581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.564591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.564871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.565216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.565225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 
00:26:12.878 [2024-04-18 21:19:28.565524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.565810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.565820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.566168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.566442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.566451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.566778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.567133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.567144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.567471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.567742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.567752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.568033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.568333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.568344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.568619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.568944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.568955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.569319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.569697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.569709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 
00:26:12.878 [2024-04-18 21:19:28.570005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.570299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.570310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.570668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.571042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.571052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.571422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.571697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.571708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.572057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.572437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.572447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.572777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.573052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.573062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.573390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.573722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.573733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.574082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.574404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.574416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 
00:26:12.878 [2024-04-18 21:19:28.574743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.575092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.575103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.575449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.575725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.575737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.878 qpair failed and we were unable to recover it. 00:26:12.878 [2024-04-18 21:19:28.576020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.878 [2024-04-18 21:19:28.576352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.576363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.576690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.577054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.577065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.577439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.577741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.577753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.578100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.578406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.578417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.578776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.579048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.579059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 
00:26:12.879 [2024-04-18 21:19:28.579416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.579679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.579690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.580048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.580393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.580405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.580680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.581029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.581040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.581382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.581678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.581689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.582036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.582360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.582371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.582696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.583041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.583052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.583320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.583670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.583681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 
00:26:12.879 [2024-04-18 21:19:28.583910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.584239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.584250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.584604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.584956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.584966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.585315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.585658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.585668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.585961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.586237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.586247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.586599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.586868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.586878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.587223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.587582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.587592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.587820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.588167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.588177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 
00:26:12.879 [2024-04-18 21:19:28.588568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.588862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.588872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.589168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.589518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.589530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.589812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.590159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.590169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.590520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.590868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.590878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.591223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.591505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.591528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.591799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.592141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.592151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 00:26:12.879 [2024-04-18 21:19:28.592502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.592776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.879 [2024-04-18 21:19:28.592787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.879 qpair failed and we were unable to recover it. 
00:26:12.880 [2024-04-18 21:19:28.593136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.593484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.593495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.593767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.594117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.594127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.594397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.594697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.594708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.595053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.595400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.595411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.595763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.595975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.595985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.596336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.596686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.596696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.597041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.597316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.597325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 
00:26:12.880 [2024-04-18 21:19:28.597675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.597941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.597950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.598250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.598592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.598602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.598950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.599291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.599300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.599652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.599948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.599957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.600254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.600599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.600608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.600877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.601223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.601232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.601494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.601799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.601809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 
00:26:12.880 [2024-04-18 21:19:28.602077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.602335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.602346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.602611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.602895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.602906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.603278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.603623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.603633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.603985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.604245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.604255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.604609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.604957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.604967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.605320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.605612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.605621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.605972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.606320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.606331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 
00:26:12.880 [2024-04-18 21:19:28.606659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.607005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.607016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.607223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.607552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.607564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.607915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.608246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.608256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.608583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.608950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.608961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.609287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.609633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.609644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.609921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.610207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.880 [2024-04-18 21:19:28.610216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.880 qpair failed and we were unable to recover it. 00:26:12.880 [2024-04-18 21:19:28.610455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.610825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.610835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 
00:26:12.881 [2024-04-18 21:19:28.611137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.611351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.611361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.611641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.611989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.611999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.612326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.612652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.612663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.612927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.613223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.613233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.613582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.613853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.613863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.614127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.614326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.614336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.614664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.615023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.615033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 
00:26:12.881 [2024-04-18 21:19:28.615324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.615580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.615590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.615939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.616220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.616230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.616450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.616887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.616898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.617153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.617500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.617515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.617803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.618072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.618083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.618431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.618769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.618779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.619046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.619338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.619347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 
00:26:12.881 [2024-04-18 21:19:28.619648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.619880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.619889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.620094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.620418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.620428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.620765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.620972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.620981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.621259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.621578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.621588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.621799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.622070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.622079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.622361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.622648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.622658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.623007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.623350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.623360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 
00:26:12.881 [2024-04-18 21:19:28.623634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.623957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.623966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.624308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.624660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.624670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.624952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.625296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.625305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.625580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.625925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.625935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.626281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.626500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.626514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.626853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.627145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.627154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 00:26:12.881 [2024-04-18 21:19:28.627428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.627693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.881 [2024-04-18 21:19:28.627702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.881 qpair failed and we were unable to recover it. 
00:26:12.882 [2024-04-18 21:19:28.627976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.628248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.628257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.628618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.628992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.629002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.629271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.629609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.629619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.629944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.630292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.630301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.630582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.630929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.630939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.631213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.631507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.631522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.631849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.632118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.632127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 
00:26:12.882 [2024-04-18 21:19:28.632385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.632708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.632718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.633070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.633371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.633380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.633733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.634083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.634093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.634438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.634784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.634794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.635096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.635361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.635371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.635638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.635957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.635967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.636262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.636634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.636644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 
00:26:12.882 [2024-04-18 21:19:28.637015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.637366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.637377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.637650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.637908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.637918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.638264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.638624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.638634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.638934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.639324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.639333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.639624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.639960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.639969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.640238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.640548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.640557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.640837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.641124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.641133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 
00:26:12.882 [2024-04-18 21:19:28.641407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.641746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.641755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.642025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.642374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.642383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.642589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.642861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.642870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.643133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.643460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.643471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.882 [2024-04-18 21:19:28.643687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.643960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.882 [2024-04-18 21:19:28.643969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.882 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.644296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.644572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.644582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.644938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.645205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.645214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 
00:26:12.883 [2024-04-18 21:19:28.645475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.645751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.645762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.646062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.646385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.646394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.646737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.647012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.647021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.647240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.647514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.647524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.647846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.648286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.648295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.648553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.648833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.648842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.649116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.649318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.649331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 
00:26:12.883 [2024-04-18 21:19:28.649675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.650024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.650033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.650386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.650651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.650661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.651015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.651339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.651349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.651635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.651933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.651943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.652213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.652534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.652544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.652746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.653080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.653089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.653291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.653615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.653625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 
00:26:12.883 [2024-04-18 21:19:28.653942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.654207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.654217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.654483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.654764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.654774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.655129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.655473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.655484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.655780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.656070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.656080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.656356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.656608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.656618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.656948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.657301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.657310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.657609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.657813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.657823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 
00:26:12.883 [2024-04-18 21:19:28.658095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.658417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.658426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.658631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.658908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.658917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.659283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.659540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.659550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.659903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.660172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.660181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.660393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.660713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.660723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.883 [2024-04-18 21:19:28.660983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.661248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.883 [2024-04-18 21:19:28.661257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.883 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.661612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.661879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.661889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 
00:26:12.884 [2024-04-18 21:19:28.662241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.662499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.662509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.662862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.663162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.663171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.663377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.663700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.663710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.664060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.664329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.664338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.664553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.664900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.664909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.665236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.665518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.665528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.665784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.666076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.666085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 
00:26:12.884 [2024-04-18 21:19:28.666306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.666570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.666580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.666923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.667118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.667127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.667390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.667647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.667657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.667927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.668200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.668210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.668565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.668836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.668845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.669111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.669385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.669394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.669745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.670070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.670079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 
00:26:12.884 [2024-04-18 21:19:28.670351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.670613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.670622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.670977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.671298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.671307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.671573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.671901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.671911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.672234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.672521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.672531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.884 [2024-04-18 21:19:28.672812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.673188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-04-18 21:19:28.673197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.884 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.673487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.673883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.673892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.674241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.674527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.674537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 
00:26:12.885 [2024-04-18 21:19:28.674863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.675200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.675209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.675477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.675847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.675856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.676276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.676606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.676616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.676902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.677249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.677258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.677530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.677854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.677863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.678201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.678548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.678558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.678778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.679064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.679073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 
00:26:12.885 [2024-04-18 21:19:28.679423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.679766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.679776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.679995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.680282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.680291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.680570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.680917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.680926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.681261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.681586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.681596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.681869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.682227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.682236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.682522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.682735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.682745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.683092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.683368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.683378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 
00:26:12.885 [2024-04-18 21:19:28.683635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.683920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.683929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.684252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.684600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.684610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.684960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.685309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.685318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.685640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.685903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.685912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.686280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.686562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.686571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.686853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.687055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.687064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.687271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.687613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.687623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 
00:26:12.885 [2024-04-18 21:19:28.687969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.688251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.688260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.688537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.688811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.688820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.689151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.689423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.689432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.885 [2024-04-18 21:19:28.689792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.690057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-04-18 21:19:28.690066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.885 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.690326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.690580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.690589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.690892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.691216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.691226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.691501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.691857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.691867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 
00:26:12.886 [2024-04-18 21:19:28.692128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.692402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.692411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.692788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.693137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.693146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.693414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.693690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.693699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.694032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.694324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.694334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.694681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.695049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.695058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.695428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.695760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.695770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.696051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.696467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.696477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 
00:26:12.886 [2024-04-18 21:19:28.696737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.697085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.697095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.697382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.697667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.697677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.697956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.698323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.698333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.698701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.698932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.698942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.699278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.699517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.699527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.699876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.700238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.700247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.700639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.700855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.700865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 
00:26:12.886 [2024-04-18 21:19:28.701121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.701397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.701407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.701748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.702072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.702081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.702283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.702495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.702505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.702768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.703034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.703043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.703304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.703581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.703591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.703966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.704226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.704235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.704560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.704857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.704866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 
00:26:12.886 [2024-04-18 21:19:28.705193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.705543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.705553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.705876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.706134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.706143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.706486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.706861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.706871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.707219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.707543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-04-18 21:19:28.707553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.886 qpair failed and we were unable to recover it. 00:26:12.886 [2024-04-18 21:19:28.707823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.708049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.708059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.708339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.708612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.708622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.708970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.709185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.709195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 
00:26:12.887 [2024-04-18 21:19:28.709408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.709703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.709713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.709998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.710326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.710336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.710605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.710762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.710771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.710974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.711230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.711239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.711519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.711781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.711790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.712005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.712401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.712410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.712694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.713016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.713026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 
00:26:12.887 [2024-04-18 21:19:28.713352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.713697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.713707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.714002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.714328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.714337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.714614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.714831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.714840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.715184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.715474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.715483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.715763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.716015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.716024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.716291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.716668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.716677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.717000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.717346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.717355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 
00:26:12.887 [2024-04-18 21:19:28.717681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.717952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.717962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.718327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.718614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.718624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.718901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.719246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.719255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.719602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.719900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.719909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.720263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.720532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.720542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.720832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.721098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.721107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 00:26:12.887 [2024-04-18 21:19:28.721424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.721754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.887 [2024-04-18 21:19:28.721764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.887 qpair failed and we were unable to recover it. 
00:26:12.888 [2024-04-18 21:19:28.722030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.722376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.722385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.722644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.722921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.722930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.723214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.723560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.723570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.723785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.724075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.724084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.724360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.724679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.724688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.725031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.725370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.725379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.725722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.726064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.726073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 
00:26:12.888 [2024-04-18 21:19:28.726389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.726654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.726663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.727007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.727361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.727370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.727696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.728039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.728048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.728308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.728690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.728699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.728961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.729249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.729260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.729559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.729850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.729859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.730085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.730447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.730457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 
00:26:12.888 [2024-04-18 21:19:28.730807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.731028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.731037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.731382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.731798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.731808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.732142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.732488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.732497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.732784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.733133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.733142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.733528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.733745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.733754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.734095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.734447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.734456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.734806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.735102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.735111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 
00:26:12.888 [2024-04-18 21:19:28.735318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.735643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.735655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.735987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.736325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.736334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.736662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.736934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.736943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.737284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.737612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.737622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.737893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.738183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.738192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.888 [2024-04-18 21:19:28.738546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.738845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.888 [2024-04-18 21:19:28.738855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.888 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.739147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.739421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.739430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 
00:26:12.889 [2024-04-18 21:19:28.739705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.740005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.740014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.740298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.740534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.740544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.740889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.741167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.741177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.741524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.741756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.741769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.742124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.742399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.742409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.742689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.743036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.743046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.743395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.743716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.743726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 
00:26:12.889 [2024-04-18 21:19:28.744031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.744363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.744372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.744694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.744980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.744989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.745343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.745632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.745641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.746007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.746378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.746387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.746685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.746950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.746959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.747286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.747654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.747664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.748037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.748400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.748410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 
00:26:12.889 [2024-04-18 21:19:28.748783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.749156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.749165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.749437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.749803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.749813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.750090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.750381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.750390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.750736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.751086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.751095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.751372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.751715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.751725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.752075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.752400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.752409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.889 [2024-04-18 21:19:28.752703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.753058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.753067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 
00:26:12.889 [2024-04-18 21:19:28.753415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.753703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.889 [2024-04-18 21:19:28.753713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.889 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.753991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.754280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.754289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.754551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.754895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.754905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.755203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.755549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.755558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.755906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.756256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.756265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.756524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.756725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.756734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.757006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.757354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.757363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 
00:26:12.890 [2024-04-18 21:19:28.757684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.758032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.758041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.758317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.758590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.758600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.758951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.759294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.759304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.759626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.759972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.759981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.760259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.760607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.760616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.760963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.761284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.761293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.761645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.761914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.761923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 
00:26:12.890 [2024-04-18 21:19:28.762184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.762554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.762564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.762886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.763211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.763220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.763492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.763850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.763860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.764233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.764599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.764609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.764909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.765208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.765218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.765569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.765915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.765924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.766270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.766570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.766579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 
00:26:12.890 [2024-04-18 21:19:28.766947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.767294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.767303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.767593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.767930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.767939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.768335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.768684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.768693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.768982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.769329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.769338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.769681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.769960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.769969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.770319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.770667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.770677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.890 [2024-04-18 21:19:28.770976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.771270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.771279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 
00:26:12.890 [2024-04-18 21:19:28.771627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.771974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.890 [2024-04-18 21:19:28.771983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.890 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.772266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.772547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.772556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.772900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.773249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.773258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.773535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.773874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.773884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.774229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.774573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.774583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.774855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.775202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.775212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.775536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.775858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.775867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 
00:26:12.891 [2024-04-18 21:19:28.776191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.776566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.776576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.776864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.777184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.777194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.777540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.777886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.777895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.778168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.778442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.778451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.778734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.779056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.779065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.779389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.779732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.779760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.780095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.780415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.780424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 
00:26:12.891 [2024-04-18 21:19:28.780757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.781101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.781111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.781334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.781675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.781684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.782039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.782361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.782370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.782633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.782910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.782919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.783265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.783534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.783544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.783760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.784128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.784137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.784407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.784707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.784717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 
00:26:12.891 [2024-04-18 21:19:28.785062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.785384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.785393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.785666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.786038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.786047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.786420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.786718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.786728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.787075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.787366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.787375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.787722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.788043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.788051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.788405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.788725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.788735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 00:26:12.891 [2024-04-18 21:19:28.788992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.789338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.891 [2024-04-18 21:19:28.789347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.891 qpair failed and we were unable to recover it. 
00:26:12.892 [2024-04-18 21:19:28.789711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.789926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.789935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.790284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.790630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.790640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.790919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.791261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.791270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.791594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.791941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.791951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.792300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.792645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.792654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.792927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.793248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.793257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.793626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.793997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.794007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 
00:26:12.892 [2024-04-18 21:19:28.794285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.794628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.794638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.794931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.795213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.795222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.795568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.795917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.795926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.796272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.796598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.796608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:12.892 qpair failed and we were unable to recover it. 00:26:12.892 [2024-04-18 21:19:28.796957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.892 [2024-04-18 21:19:28.797300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.797310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 00:26:13.160 [2024-04-18 21:19:28.797658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.797955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.797964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 00:26:13.160 [2024-04-18 21:19:28.798263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.798612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.798621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 
00:26:13.160 [2024-04-18 21:19:28.798945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.799217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.799227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 00:26:13.160 [2024-04-18 21:19:28.799500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.799834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.799844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 00:26:13.160 [2024-04-18 21:19:28.800128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.800483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.800492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 00:26:13.160 [2024-04-18 21:19:28.800859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.801229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.801239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 00:26:13.160 [2024-04-18 21:19:28.801523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.801875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.801884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 00:26:13.160 [2024-04-18 21:19:28.802227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.802575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.802585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 00:26:13.160 [2024-04-18 21:19:28.802939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.803288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.160 [2024-04-18 21:19:28.803296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.160 qpair failed and we were unable to recover it. 
00:26:13.160 [2024-04-18 21:19:28.803650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.803942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.803951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.804230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.804572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.804582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.804934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.805226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.805235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.805507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.805866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.805876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.806158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.806508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.806521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.806866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.807209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.807218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.807567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.807846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.807855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 
00:26:13.161 [2024-04-18 21:19:28.808180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.808439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.808449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.808797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.809122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.809131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.809399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.809761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.809770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.810116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.810437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.810446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.810801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.811121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.811130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.811398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.811764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.811774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.812141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.812347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.812355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 
00:26:13.161 [2024-04-18 21:19:28.812704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.813050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.813060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.813408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.813754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.813763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.814036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.814309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.814318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.814666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.814987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.814996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.815318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.815586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.815595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.815940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.816287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.816296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.816598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.816965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.816975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 
00:26:13.161 [2024-04-18 21:19:28.817345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.817641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.161 [2024-04-18 21:19:28.817651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.161 qpair failed and we were unable to recover it. 00:26:13.161 [2024-04-18 21:19:28.817940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.818211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.818220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.818567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.818856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.818865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.819165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.819482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.819491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.819847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.820134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.820143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.820425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.820721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.820731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.821076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.821430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.821439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 
00:26:13.162 [2024-04-18 21:19:28.821759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.822062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.822071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.822415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.822671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.822681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.823037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.823322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.823331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.823588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.823860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.823869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.824164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.824486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.824495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.824855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.825130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.825139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.825490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.825770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.825779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 
00:26:13.162 [2024-04-18 21:19:28.826058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.826333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.826342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.826689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.826964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.826973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.827321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.827669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.827679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.827897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.828217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.828226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.828531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.828891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.828901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.829273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.829637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.829647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 00:26:13.162 [2024-04-18 21:19:28.829943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.830295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.830304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.162 qpair failed and we were unable to recover it. 
00:26:13.162 [2024-04-18 21:19:28.830644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.162 [2024-04-18 21:19:28.830966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.830976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.831303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.831624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.831634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.831980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.832329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.832338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.832686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.833015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.833024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.833291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.833637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.833650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.833997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.834286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.834296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.834624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.834969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.834978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 
00:26:13.163 [2024-04-18 21:19:28.835325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.835616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.835625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.835950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.836270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.836280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.836606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.836930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.836940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.837289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.837569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.837578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.837903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.838223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.838232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.838569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.838922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.838931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.839203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.839546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.839555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 
00:26:13.163 [2024-04-18 21:19:28.839925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.840200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.840213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.840411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.840756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.840766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.841089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.841435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.841445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.841788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.842137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.842146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.842497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.842765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.842774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.843046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.843397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.843406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.843727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.844094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.844103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 
00:26:13.163 [2024-04-18 21:19:28.844380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.844639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.844649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.844993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.845340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.845349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.845647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.846020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.846029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.846288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.846616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.846628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.846974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.847231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.847241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.847593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.847972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.847981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.163 [2024-04-18 21:19:28.848253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.848598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.848608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 
00:26:13.163 [2024-04-18 21:19:28.848901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.849248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.163 [2024-04-18 21:19:28.849257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.163 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.849519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.849873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.849882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.850177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.850450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.850459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.850780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.851153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.851163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.851442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.851714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.851723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.852048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.852328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.852338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.852613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.852874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.852884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 
00:26:13.164 [2024-04-18 21:19:28.853176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.853498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.853508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.853811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.854167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.854176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.854470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.854816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.854826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.855155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.855496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.855505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.855854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.856246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.856255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.856563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.856931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.856941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.857284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.857631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.857640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 
00:26:13.164 [2024-04-18 21:19:28.857985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.858393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.858402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.858677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.859015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.859025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.859294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.859597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.859607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.859877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.860222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.860231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.860506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.860867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.860876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.861200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.861460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.861470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.861730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.862025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.862034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 
00:26:13.164 [2024-04-18 21:19:28.862359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.862708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.862718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.863008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.863356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.863366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.863691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.864032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.864041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.864391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.864676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.864685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.865020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.865230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.865239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.865499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.865793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.865803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.866077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.866309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.866319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 
00:26:13.164 [2024-04-18 21:19:28.866664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.867012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.867022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.164 qpair failed and we were unable to recover it. 00:26:13.164 [2024-04-18 21:19:28.867409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.164 [2024-04-18 21:19:28.867703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.867713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.868017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.868317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.868326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.868611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.868891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.868900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.869187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.869403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.869412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.869630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.869826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.869837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.870206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.870554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.870564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 
00:26:13.165 [2024-04-18 21:19:28.870922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.871304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.871321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.871684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.872056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.872065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.872372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.872583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.872593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.872821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.873184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.873194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.873473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.873696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.873706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.873975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.874236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.874245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.874506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.874792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.874802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 
00:26:13.165 [2024-04-18 21:19:28.875073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.875419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.875429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.875703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.875999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.876008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.876289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.876627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.876637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.876912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.877255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.877264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.877586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.877911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.877921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.878247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.878547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.878558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.878848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.879120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.879130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 
00:26:13.165 [2024-04-18 21:19:28.879413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.879683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.879693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.880019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.880356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.880366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.880625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.880961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.880971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.881171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.881600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.881610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.165 qpair failed and we were unable to recover it. 00:26:13.165 [2024-04-18 21:19:28.881912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.165 [2024-04-18 21:19:28.882131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.882141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.882474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.882805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.882815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.883139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.883424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.883434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 
00:26:13.166 [2024-04-18 21:19:28.883721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.884023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.884033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.884333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.884643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.884654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.884927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.885251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.885260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.885619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.885993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.886003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.886225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.886526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.886536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.886747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.887053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.887062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.887343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.887597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.887607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 
00:26:13.166 [2024-04-18 21:19:28.887932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.888144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.888154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.888419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.888753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.888763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.889044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.889366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.889375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.889597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.889866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.889876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.890165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.890431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.890441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.890776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.891065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.891075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.891463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.891798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.891809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 
00:26:13.166 [2024-04-18 21:19:28.892141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.892415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.892424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.892783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.893067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.893088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.893430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.893825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.893836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.894164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.894379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.894388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.894716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.894992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.895002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.895297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.895644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.895654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.895923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.896153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.896163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 
00:26:13.166 [2024-04-18 21:19:28.896519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.896792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.896801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.897106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.897344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.897354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.897633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.897956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.897966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.898319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.898618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.898628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.166 qpair failed and we were unable to recover it. 00:26:13.166 [2024-04-18 21:19:28.898845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.166 [2024-04-18 21:19:28.899196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.899205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.899480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.899690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.899701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.900080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.900369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.900379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 
00:26:13.167 [2024-04-18 21:19:28.900595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.900823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.900833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.901091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.901389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.901398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.901609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.901961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.901972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.902194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.902452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.902462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.902804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.903073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.903082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.903394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.903793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.903803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.904156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.904481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.904491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 
00:26:13.167 [2024-04-18 21:19:28.904794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.905007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.905016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.905241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.905574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.905584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.905886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.906152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.906162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.906422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.906698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.906708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.907051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.907406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.907415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.907734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.908058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.908070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.908277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.908569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.908579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 
00:26:13.167 [2024-04-18 21:19:28.908805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.909009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.909018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.909405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.909732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.909744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.909966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.910301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.910310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.910603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.910826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.910836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.911117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.911527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.911537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.911762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.912089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.912098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.912381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.912805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.912815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 
00:26:13.167 [2024-04-18 21:19:28.913228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.913500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.913516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.913851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.914063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.914073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.914484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.914790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.914801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.167 [2024-04-18 21:19:28.915069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.915424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.167 [2024-04-18 21:19:28.915434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.167 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.915708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.916049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.916058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.916271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.916528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.916539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.916811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.917087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.917097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 
00:26:13.168 [2024-04-18 21:19:28.917397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.917813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.917823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.918153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.918432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.918443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.918723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.919048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.919057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.919502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.919844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.919854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.920140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.920413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.920423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.920686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.920975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.920985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.921298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.921644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.921655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 
00:26:13.168 [2024-04-18 21:19:28.921954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.922314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.922324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.922641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.922852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.922862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.923081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.923305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.923315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.923667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.924082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.924091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.924301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.924522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.924533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.924747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.924977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.924987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.925282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.925505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.925522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 
00:26:13.168 [2024-04-18 21:19:28.925888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.926170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.926179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.926446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.926716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.926726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.927004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.927313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.927323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.927684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.928010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.928019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.928233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.928579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.928589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.928868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.929192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.929202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.929604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.929885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.929895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 
00:26:13.168 [2024-04-18 21:19:28.930170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.930439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.930448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.930720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.930927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.930937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.931139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.931502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.931523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.931814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.932144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.932154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.168 [2024-04-18 21:19:28.932434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.932719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.168 [2024-04-18 21:19:28.932734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.168 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.933058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.933329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.933339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.933770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.933989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.933998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 
00:26:13.169 [2024-04-18 21:19:28.934330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.934651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.934662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.934943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.935236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.935246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.935604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.935873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.935883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.936172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.936470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.936480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.936805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.937141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.937150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.937393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.937683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.937693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.938121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.938401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.938411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 
00:26:13.169 [2024-04-18 21:19:28.938620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.938910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.938922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.939194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.939528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.939539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.939866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.940152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.940162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.940388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.940672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.940682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.940960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.941282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.941292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.941579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.941906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.941915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.942240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.942472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.942482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 
00:26:13.169 [2024-04-18 21:19:28.942759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.943053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.943063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.943400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.943614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.943624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.943885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.944303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.944312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.944658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.944958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.944969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.945299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.945595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.945605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.945813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.946038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.946048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.946338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.946685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.946695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 
00:26:13.169 [2024-04-18 21:19:28.946923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.947295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.947305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.947594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.947867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.947877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.948155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.948447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.948457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.948728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.949027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.949037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.949265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.949523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.949533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.169 qpair failed and we were unable to recover it. 00:26:13.169 [2024-04-18 21:19:28.949743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.949947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.169 [2024-04-18 21:19:28.949957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.950286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.950488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.950500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 
00:26:13.170 [2024-04-18 21:19:28.950842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.951138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.951148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.951486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.951817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.951827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.952131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.952401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.952411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.952707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.953029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.953039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.953255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.953558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.953567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.953779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.954052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.954062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.954465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.954761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.954770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 
00:26:13.170 [2024-04-18 21:19:28.955076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.955444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.955453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.955726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.956075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.956084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.956455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.956691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.956701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.956912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.957181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.957191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.957465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.957749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.957758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.958026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.958309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.958319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.958590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.958872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.958881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 
00:26:13.170 [2024-04-18 21:19:28.959160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.959457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.959466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.959692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.959912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.959922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.960194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.960582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.960593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.960804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.961083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.961092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.961303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.961562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.961573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.961789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.962067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.962076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.170 [2024-04-18 21:19:28.962343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.962645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.962655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 
00:26:13.170 [2024-04-18 21:19:28.962864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.963162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.170 [2024-04-18 21:19:28.963171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.170 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.963472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.963761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.963771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.964049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.964326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.964336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.964707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.964985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.964995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.965305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.965570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.965580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.965933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.966313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.966322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.966680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.966956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.966966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 
00:26:13.171 [2024-04-18 21:19:28.967189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.967577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.967587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.967864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.968085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.968094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.968435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.968736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.968746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.969021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.969236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.969246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.969519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.969806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.969816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.970111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.970417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.970426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.970728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.971012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.971022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 
00:26:13.171 [2024-04-18 21:19:28.971301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.971613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.971623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.971902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.972179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.972188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.972554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.972845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.972855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.973127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.973406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.973416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.973684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.973887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.973897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.974174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.974528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.974538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.974815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.975033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.975042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 
00:26:13.171 [2024-04-18 21:19:28.975343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.975712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.975722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.975991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.976375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.976384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.976676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.976952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.976962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.977243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.977520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.977531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.977750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.977979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.977988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.978337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.978635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.978645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.978850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.979088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.979097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 
00:26:13.171 [2024-04-18 21:19:28.979462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.979806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.171 [2024-04-18 21:19:28.979816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.171 qpair failed and we were unable to recover it. 00:26:13.171 [2024-04-18 21:19:28.980121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.980500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.980514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.980928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.981229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.981239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.981589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.981821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.981830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.982119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.982443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.982453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.982780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.983012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.983022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.983376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.983666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.983676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 
00:26:13.172 [2024-04-18 21:19:28.983896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.984089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.984098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.984396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.984719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.984729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.985004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.985287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.985296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.985659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.985938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.985947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.986228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.986597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.986607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.986823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.987100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.987109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.987415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.987776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.987786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 
00:26:13.172 [2024-04-18 21:19:28.988020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.988363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.988372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.988722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.988997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.989007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.989387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.989724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.989734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.990009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.990300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.990309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.990577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.990810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.990820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.991099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.991412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.991421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.991796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.992025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.992034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 
00:26:13.172 [2024-04-18 21:19:28.992263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.992617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.992626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.992890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.993151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.993162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.993386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.993670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.993680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.993994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.994218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.994227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.994522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.994882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.994891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.995163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.995466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.995475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.995746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.996019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.996028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 
00:26:13.172 [2024-04-18 21:19:28.996408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.996680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.172 [2024-04-18 21:19:28.996690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.172 qpair failed and we were unable to recover it. 00:26:13.172 [2024-04-18 21:19:28.996990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.997200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.997210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:28.997536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.997801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.997810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:28.998043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.998321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.998331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:28.998661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.998940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.998950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:28.999155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.999497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:28.999507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:28.999874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.000089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.000099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 
00:26:13.173 [2024-04-18 21:19:29.000422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.000724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.000735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.000957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.001188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.001197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.001471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.001759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.001769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.001996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.002297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.002307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.002664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.002925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.002935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.003275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.003612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.003623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.003895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.004209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.004218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 
00:26:13.173 [2024-04-18 21:19:29.004559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.004790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.004800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.005079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.005453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.005462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.005742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.005971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.005980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.006255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.006475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.006485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.006768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.007044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.007053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.007386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.007716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.007726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.007940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.008164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.008173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 
00:26:13.173 [2024-04-18 21:19:29.008453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.008778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.008788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.009070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.009400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.009410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.009699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.010050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.010059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.010453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.010776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.010786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.011014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.011312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.011321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.011595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.011832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.011842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.012123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.012482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.012492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 
00:26:13.173 [2024-04-18 21:19:29.012873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.013078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.013087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.173 qpair failed and we were unable to recover it. 00:26:13.173 [2024-04-18 21:19:29.013399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.173 [2024-04-18 21:19:29.013612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.013622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.013891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.014094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.014103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.014322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.014672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.014682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.014953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.015269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.015278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.015602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.015825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.015835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.016145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.016497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.016506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 
00:26:13.174 [2024-04-18 21:19:29.016813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.017153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.017162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.017518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.017750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.017759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.017988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.018355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.018364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.018669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.019023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.019032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.019356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.019684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.019694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.019918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.020130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.020139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 00:26:13.174 [2024-04-18 21:19:29.020508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.020741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.174 [2024-04-18 21:19:29.020751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.174 qpair failed and we were unable to recover it. 
[... the same failure sequence -- two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats for every remaining connection attempt from [2024-04-18 21:19:29.020971] through [2024-04-18 21:19:29.095916] (elapsed timestamps 00:26:13.174 - 00:26:13.445) ...]
00:26:13.445 [2024-04-18 21:19:29.096112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.096320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.096330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.096554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.096828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.096837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.097043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.097323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.097332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.097659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.097866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.097876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.098144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.098357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.098366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.098634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.098806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.098815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.099083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.099342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.099351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 
00:26:13.445 [2024-04-18 21:19:29.099610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.099854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.099864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.100061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.100265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.100274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.100530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.100721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.100730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.101012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.101113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.101121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.101390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.101598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.101609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.101832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.101935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.101944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.102271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.102538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.102547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 
00:26:13.445 [2024-04-18 21:19:29.102741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.103041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.103051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.103309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.103518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.103530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.103736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.103930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.103940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.104137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.104347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.104356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.104569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.104772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.104783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.105047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.105307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.105316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.105528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.105728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.105737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 
00:26:13.445 [2024-04-18 21:19:29.106015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.106210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.106220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.445 qpair failed and we were unable to recover it. 00:26:13.445 [2024-04-18 21:19:29.106409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.445 [2024-04-18 21:19:29.106613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.106623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.106835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.107039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.107049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.107310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.107525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.107535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.107800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.107999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.108010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.108234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.108432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.108441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.108704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.108977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.108987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 
00:26:13.446 [2024-04-18 21:19:29.109184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.109403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.109413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.109629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.109732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.109741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.109932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.110144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.110153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.110389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.110680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.110690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.110901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.111165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.111175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.111450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.111660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.111669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.111982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.112181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.112191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 
00:26:13.446 [2024-04-18 21:19:29.112389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.112610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.112623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.112859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.113073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.113082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.113291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.113549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.113559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.113816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.114026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.114036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.114297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.114565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.114574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.114846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.115066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.115076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.115366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.115593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.115603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 
00:26:13.446 [2024-04-18 21:19:29.115919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.116119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.116128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.116336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.116544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.116555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.446 qpair failed and we were unable to recover it. 00:26:13.446 [2024-04-18 21:19:29.116828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.117112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.446 [2024-04-18 21:19:29.117122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.117325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.117542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.117554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.117761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.118033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.118043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.118200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.118410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.118419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.118622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.118724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.118734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 
00:26:13.447 [2024-04-18 21:19:29.118933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.119044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.119054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.119314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.119494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.119504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.119714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.119991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.120001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.120200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.120458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.120468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.120672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.121031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.121041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.121353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.121556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.121567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.121779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.121982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.121992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 
00:26:13.447 [2024-04-18 21:19:29.122142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.122405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.122415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.122643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.122858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.122867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.123078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.123267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.123277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.123561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.123755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.123765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.123988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.124195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.124205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.124411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.124681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.124691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.124896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.125109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.125119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 
00:26:13.447 [2024-04-18 21:19:29.125347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.125567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.125577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.125806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.126072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.126081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.126191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.126389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.126398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.126634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.126999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.127008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.127213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.127416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.127425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.127701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.127918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.127928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.128200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.128469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.128479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 
00:26:13.447 [2024-04-18 21:19:29.128757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.128981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.128991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.129269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.129442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.129452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.129657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.129872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.129882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.447 qpair failed and we were unable to recover it. 00:26:13.447 [2024-04-18 21:19:29.130090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.447 [2024-04-18 21:19:29.130298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.130307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.130551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.130840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.130853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.131149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.131374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.131384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.131669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.131891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.131901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 
00:26:13.448 [2024-04-18 21:19:29.132111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.132331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.132340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.132579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.132842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.132852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.133125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.133393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.133403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.133669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.133938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.133948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.134244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.134627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.134637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.134743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.135050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.135060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.135348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.135649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.135659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 
00:26:13.448 [2024-04-18 21:19:29.135858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.136067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.136078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.136339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.136610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.136620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.136805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.137060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.137069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.137344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.137619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.137629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.137848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.138064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.138073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.138339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.138549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.138560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.138773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.138888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.138897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 
00:26:13.448 [2024-04-18 21:19:29.139100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.139368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.139378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.139658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.139866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.139875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.140080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.140262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.140272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.140388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.140689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.140699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.140974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.141175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.141186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.141398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.141723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.448 [2024-04-18 21:19:29.141733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.448 qpair failed and we were unable to recover it. 00:26:13.448 [2024-04-18 21:19:29.141923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.142136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.142146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 
00:26:13.449 [2024-04-18 21:19:29.142497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.142720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.142730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.142999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.143196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.143206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.143376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.143649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.143658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.143963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.144163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.144173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.144398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.144809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.144820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.145105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.145408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.145418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.145688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.145954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.145963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 
00:26:13.449 [2024-04-18 21:19:29.146181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.146470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.146480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.146703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.146854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.146863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.147079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.147417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.147426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.147545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.147762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.147771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.147980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.148309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.148319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.148630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.148846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.148855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.149072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.149271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.149281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 
00:26:13.449 [2024-04-18 21:19:29.149554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.149836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.149847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.150054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.150250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.150260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.449 [2024-04-18 21:19:29.150601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.150794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.449 [2024-04-18 21:19:29.150804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.449 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.151002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.151197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.151208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.151478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.151685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.151696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.151904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.152100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.152110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.152327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.152590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.152600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 
00:26:13.450 [2024-04-18 21:19:29.152888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.153084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.153095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.153291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.153500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.153516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.153723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.153986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.153996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.154209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.154414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.154424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.154637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.154855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.154865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.155067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.155273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.155283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.155503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.155696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.155706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 
00:26:13.450 [2024-04-18 21:19:29.155914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.156117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.156126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.156292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.156492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.156502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.156710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.156911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.156921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.157225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.157443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.157453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.157725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.157914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.157923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.158116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.158241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.158251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.158464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.158728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.158738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 
00:26:13.450 [2024-04-18 21:19:29.158954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.159224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.159234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.159347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.159548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.159558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.159768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.160069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.450 [2024-04-18 21:19:29.160079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.450 qpair failed and we were unable to recover it. 00:26:13.450 [2024-04-18 21:19:29.160246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.160447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.160457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.160651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.160881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.160891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.161106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.161299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.161310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.161531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.161724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.161735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 
00:26:13.451 [2024-04-18 21:19:29.162080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.162399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.162409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.162626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.162884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.162894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.163157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.163386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.163395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.163617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.163819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.163830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.164042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.164244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.164255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.164473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.164668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.164679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.164881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.165090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.165100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 
00:26:13.451 [2024-04-18 21:19:29.165407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.165707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.165717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.165982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.166269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.166279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.166485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.166764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.166774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.166988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.167199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.167210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.167313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.167653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.167663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.167926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.168260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.168271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.451 [2024-04-18 21:19:29.168529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.168721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.168732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 
00:26:13.451 [2024-04-18 21:19:29.168925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.169191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.451 [2024-04-18 21:19:29.169201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.451 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.169408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.169601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.169611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.169818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.170097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.170107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.170370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.170570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.170580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.170815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.171095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.171104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.171226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.171445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.171456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.171657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.171927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.171937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 
00:26:13.452 [2024-04-18 21:19:29.172144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.172353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.172364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.172579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.172850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.172860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.172963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.173293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.173303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.173610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.173814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.173823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.174022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.174288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.174298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.174580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.174776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.174787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.174994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.175326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.175336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 
00:26:13.452 [2024-04-18 21:19:29.175668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.175863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.175874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.176076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.176293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.176303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.176499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.176706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.176717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.176974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.177237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.177248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.177528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.177794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.177804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.178022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.178216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.452 [2024-04-18 21:19:29.178226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.452 qpair failed and we were unable to recover it. 00:26:13.452 [2024-04-18 21:19:29.178418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.178795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.178806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 
00:26:13.453 [2024-04-18 21:19:29.179028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.179293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.179303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.179581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.179803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.179817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.180034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.180341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.180351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.180635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.180845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.180855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.181059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.181270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.181279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.181643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.181847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.181858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.182128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.182322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.182332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 
00:26:13.453 [2024-04-18 21:19:29.182533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.182804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.182813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.183079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.183343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.183354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.183624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.183825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.183835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.184045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.184269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.184279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.184544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.184756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.184768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.185051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.185347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.185357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.185565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.185730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.185740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 
00:26:13.453 [2024-04-18 21:19:29.185945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.186101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.186111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.186366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.186568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.186578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.186939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.187201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.187212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.187422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.187625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.187636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.453 [2024-04-18 21:19:29.187852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.187967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.453 [2024-04-18 21:19:29.187976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.453 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.188229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.188425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.188435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.188766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.189114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.189125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 
00:26:13.454 [2024-04-18 21:19:29.189345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.189615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.189628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.189837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.190162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.190172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.190417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.190688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.190698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.190964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.191253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.191263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.191474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.191688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.191698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.191908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.192193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.192203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.192475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.192746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.192756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 
00:26:13.454 [2024-04-18 21:19:29.192976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.193186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.193196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.193389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.193643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.193654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.193863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.194075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.194086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.194296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.194413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.194424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.194637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.194842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.194852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.195068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.195296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.195306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.195552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.195767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.195778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 
00:26:13.454 [2024-04-18 21:19:29.196052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.196254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.196264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.196462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.196720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.196732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.454 [2024-04-18 21:19:29.196943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.197149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.454 [2024-04-18 21:19:29.197159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.454 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.197372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.197664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.197675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.197877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.198146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.198156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.198338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.198682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.198692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.198898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.199182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.199192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 
00:26:13.455 [2024-04-18 21:19:29.199450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.199686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.199697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.199911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.200135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.200145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.200345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.200605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.200615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.200884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.201113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.201122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.201272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.201550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.201561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.201705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.201923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.201933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.202199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.202412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.202423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 
00:26:13.455 [2024-04-18 21:19:29.202620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.202981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.202991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.203197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.203300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.455 [2024-04-18 21:19:29.203309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.455 qpair failed and we were unable to recover it. 00:26:13.455 [2024-04-18 21:19:29.203517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.203790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.203801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.204029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.204152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.204162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.204279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.204487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.204497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.204761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.205049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.205060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.205263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.205583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.205593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 
00:26:13.456 [2024-04-18 21:19:29.205911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.206066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.206076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.206375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.206617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.206628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.206883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.207106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.207116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.207322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.207589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.207600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.207798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.208149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.208158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.208372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.208577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.208586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.208790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.209055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.209065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 
00:26:13.456 [2024-04-18 21:19:29.209328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.209597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.209607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.209804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.210023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.210033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.210293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.210500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.210522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.210718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.210884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.210894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.211158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.211428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.211437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.211649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.211838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.211848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 00:26:13.456 [2024-04-18 21:19:29.212107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.212295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.456 [2024-04-18 21:19:29.212305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.456 qpair failed and we were unable to recover it. 
00:26:13.456 [2024-04-18 21:19:29.212495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.212859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.212869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.213136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.213401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.213410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.213621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.213830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.213839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.214048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.214241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.214250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.214517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.214738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.214747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.214856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.215118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.215129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.215234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.215531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.215540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 
00:26:13.457 [2024-04-18 21:19:29.215811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.216030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.216039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.216244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.216456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.216465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.216683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.216894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.216903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.217172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.217428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.217437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.217639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.217898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.217907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.218109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.218319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.218329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.218552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.218745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.218755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 
00:26:13.457 [2024-04-18 21:19:29.218956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.219149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.219159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.219394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.219596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.219606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.219802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.219895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.219905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.220102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.220301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.220311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.220538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.220809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.220819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.457 qpair failed and we were unable to recover it. 00:26:13.457 [2024-04-18 21:19:29.221042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.457 [2024-04-18 21:19:29.221234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.221244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.221441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.221643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.221653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 
00:26:13.458 [2024-04-18 21:19:29.221935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.222136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.222146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.222347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.222613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.222622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.222829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.223027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.223036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.223245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.223456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.223466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.223870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.224139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.224149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.224420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.224690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.224700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.224967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.225175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.225185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 
00:26:13.458 [2024-04-18 21:19:29.225383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.225579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.225589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.225811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.226017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.226027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.226242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.226445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.226455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.226659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.226803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.226812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.227005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.227380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.227390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.227584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.227865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.227875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.228222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.228490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.228500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3088000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 
00:26:13.458 [2024-04-18 21:19:29.228615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246f7f0 is same with the state(5) to be set 00:26:13.458 [2024-04-18 21:19:29.228994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.229227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.229245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.229456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.229696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.229711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.229932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.230209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.230222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.458 [2024-04-18 21:19:29.230450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.230664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.458 [2024-04-18 21:19:29.230679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.458 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.230959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.231265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.231278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.231490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.231720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.231734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.231959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.232162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.232175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 
00:26:13.459 [2024-04-18 21:19:29.232456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.232737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.232751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.232978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.233243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.233257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.233523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.233738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.233751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.233957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.234174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.234188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.234342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.234618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.234632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.234842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.235111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.235124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.235238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.235455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.235468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 
00:26:13.459 [2024-04-18 21:19:29.235695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.235914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.235928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.236134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.236338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.236352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.236577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.236761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.236774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.236991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.237214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.237227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.237448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.237657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.237671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.237886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.238251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.238265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.238551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.238902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.238915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 
00:26:13.459 21:19:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:13.459 [2024-04-18 21:19:29.239252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 21:19:29 -- common/autotest_common.sh@850 -- # return 0 00:26:13.459 [2024-04-18 21:19:29.239540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.239555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.239715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 21:19:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:13.459 [2024-04-18 21:19:29.239923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.239938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 21:19:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:13.459 [2024-04-18 21:19:29.240220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:26:13.459 [2024-04-18 21:19:29.240484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.240498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.240814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.240956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.240969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.241170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.241402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.241417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.459 qpair failed and we were unable to recover it. 00:26:13.459 [2024-04-18 21:19:29.241648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.459 [2024-04-18 21:19:29.241871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.241885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 
00:26:13.460 [2024-04-18 21:19:29.242102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.242404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.242417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.242670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.242881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.242896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.243238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.243443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.243457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.243675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.244044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.244058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.244336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.244682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.244698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.245068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.245267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.245281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.245570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.245782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.245796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 
00:26:13.460 [2024-04-18 21:19:29.246075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.246292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.246308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.246494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.246725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.246739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.246946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.247160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.247173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.247377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.247610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.247625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.247904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.248241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.248255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.248462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.248692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.248706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.248981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.249609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.249631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 
00:26:13.460 [2024-04-18 21:19:29.249867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.250087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.250100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.250316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.250608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.250623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.250901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.251111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.251127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.251338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.251549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.251563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.251835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.252036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.252050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.252397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.252612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.252627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.252901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.253116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.253129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 
00:26:13.460 [2024-04-18 21:19:29.253415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.253640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.253654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.253887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.254107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.254121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.460 qpair failed and we were unable to recover it. 00:26:13.460 [2024-04-18 21:19:29.254328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.460 [2024-04-18 21:19:29.254621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.254635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.254849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.255075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.255089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.255297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.255582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.255596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.255796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.256011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.256024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.256317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.256541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.256555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 
00:26:13.461 [2024-04-18 21:19:29.256818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.257033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.257047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.257265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.257494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.257508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.257741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.258062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.258076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.258342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.258549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.258563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.258760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.258974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.258988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.259223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.259431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.259445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.259817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.260108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.260121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 
00:26:13.461 [2024-04-18 21:19:29.260887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.261180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.261195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.261483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.261609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.261623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.261842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.262051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.262065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.262295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.262518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.262533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.262741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.262956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.262980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.263213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.263433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.263447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.263661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.263872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.263885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 
00:26:13.461 [2024-04-18 21:19:29.264232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.264450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.264464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.264686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.264910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.264923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.461 qpair failed and we were unable to recover it. 00:26:13.461 [2024-04-18 21:19:29.265152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.265368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.461 [2024-04-18 21:19:29.265382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.265590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.265864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.265879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.266154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.266357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.266372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.266597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.266813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.266827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.267050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.267252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.267266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 
00:26:13.462 [2024-04-18 21:19:29.267468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.267684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.267703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.267933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.268148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.268162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.268554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.268935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.268949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.269164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.269379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.269393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.269612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.269994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.270009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.270229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.270443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.270457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.270688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.270894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.270909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 
00:26:13.462 [2024-04-18 21:19:29.271120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.271340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.271353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.271579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.271852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.271865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.272098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.272312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.272327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.272686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.272896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.272913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.273129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.273355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.273369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.273596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.273800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.273814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.274020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 21:19:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.462 [2024-04-18 21:19:29.274246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.274263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 
00:26:13.462 [2024-04-18 21:19:29.274470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 21:19:29 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:13.462 [2024-04-18 21:19:29.274673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.274689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 [2024-04-18 21:19:29.274897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 21:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.462 [2024-04-18 21:19:29.275118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.462 [2024-04-18 21:19:29.275133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.462 qpair failed and we were unable to recover it. 00:26:13.462 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:26:13.462 [2024-04-18 21:19:29.275337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.275553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.275568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.275906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.276121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.276135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.276353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.276586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.276600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.276818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.277135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.277148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.277361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.277579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.277593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 
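Interleaved with the connection retries, the test script reaches target_disconnect.sh line 19 and creates the backing device for the target: a 64 MB RAM-backed malloc bdev with 512-byte blocks, named Malloc0 (the bare "Malloc0" echoed a little later in the log is the RPC's return value). rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, so outside the harness the step corresponds roughly to:

# Rough standalone equivalent of the traced step (assumes a running SPDK target and the default RPC socket):
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0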
00:26:13.463 [2024-04-18 21:19:29.277804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.278082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.278095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.278298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.278576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.278591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.278804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.279010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.279024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.279243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.279456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.279469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.279686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.279863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.279877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.280100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.280311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.280325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.280538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.280753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.280767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 
00:26:13.463 [2024-04-18 21:19:29.280995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.281200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.281213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.281413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.281623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.281637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.281804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.282003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.282017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.282289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.282497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.282514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.282732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.282940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.282954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.283166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.283372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.283385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.283735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.284000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.284013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 
00:26:13.463 [2024-04-18 21:19:29.284222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.284368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.284382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.284601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.284810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.284824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.284938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.285146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.285159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.285441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.285644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.285659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.285873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.286089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.286104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.463 qpair failed and we were unable to recover it. 00:26:13.463 [2024-04-18 21:19:29.286317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.286521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.463 [2024-04-18 21:19:29.286536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.286643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.286850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.286865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 
00:26:13.464 [2024-04-18 21:19:29.287140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.287344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.287358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.287573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.287790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.287804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.288005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.288206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.288222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.288352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.288562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.288577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.288790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.288994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.289009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.289217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.289424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.289441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.289828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.290045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.290059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 
00:26:13.464 [2024-04-18 21:19:29.290332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.290546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.290562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.290844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.291113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.291128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.291345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.291558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.291573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.291773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.291990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.292003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.292159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.292361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.292375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.292660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 Malloc0 00:26:13.464 [2024-04-18 21:19:29.292876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.292891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 21:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.464 [2024-04-18 21:19:29.293164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 21:19:29 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:13.464 21:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.464 [2024-04-18 21:19:29.293384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:26:13.464 [2024-04-18 21:19:29.293397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 
00:26:13.464 [2024-04-18 21:19:29.293685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.293904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.293918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.294198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.294478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.294491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.294699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.294968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.294981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.295283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.295507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.295527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.464 qpair failed and we were unable to recover it. 00:26:13.464 [2024-04-18 21:19:29.295753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.464 [2024-04-18 21:19:29.296012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.296025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.296301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.296544] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.465 [2024-04-18 21:19:29.296581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.296595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.296809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.297027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.297040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 
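Line 21 of the script then initializes the NVMe-oF TCP transport on the target, which is what produces the "*** TCP Transport Init ***" notice from tcp.c above. Standalone, the step corresponds roughly to the call below; the -o flag is copied from the trace as-is rather than interpreted:

# Rough standalone equivalent (flags copied from the trace):
./scripts/rpc.py nvmf_create_transport -t tcp -o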
00:26:13.465 [2024-04-18 21:19:29.297210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.297430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.297444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.297712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.297918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.297931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.298154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.298437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.298451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.298723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.299087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.299101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.299328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.299564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.299592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.299800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.300017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.300030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.300258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.300434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.300448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 
00:26:13.465 [2024-04-18 21:19:29.300663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.300881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.300894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.301171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.301391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.301405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.301619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.301848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.301862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.302065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.302336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.302350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.302504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.302725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.302739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.302944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.303115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.303128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.303338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.303546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.303560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 
00:26:13.465 [2024-04-18 21:19:29.303824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.304031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.304045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 [2024-04-18 21:19:29.304247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.304520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.465 [2024-04-18 21:19:29.304534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.465 qpair failed and we were unable to recover it. 00:26:13.465 21:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.465 [2024-04-18 21:19:29.304818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 21:19:29 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:13.466 21:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.466 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:26:13.466 [2024-04-18 21:19:29.305084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.305098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.305306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.305532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.305547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.305749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.305853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.305866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.306090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.306359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.306372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 
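At target_disconnect.sh line 22 the script creates the subsystem the host will connect to, nqn.2016-06.io.spdk:cnode1, with -a allowing any host NQN to connect and -s setting the serial number. Roughly:

# Rough standalone equivalent:
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001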
00:26:13.466 [2024-04-18 21:19:29.306591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.306775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.306788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.307066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.307175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.307189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.307470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.307689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.307703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.308085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.308183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.308196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.308490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.308715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.308729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.309004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.309203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.309219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.309506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.309784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.309797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 
00:26:13.466 [2024-04-18 21:19:29.310063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.310343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.310356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.310641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.310967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.310980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.311138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.311534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.311549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.311851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.312062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.312075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.312174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.312452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.312466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 21:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.466 21:19:29 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.466 21:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.466 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:26:13.466 [2024-04-18 21:19:29.313301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.313660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.313678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 00:26:13.466 [2024-04-18 21:19:29.313956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.314235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.466 [2024-04-18 21:19:29.314249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.466 qpair failed and we were unable to recover it. 
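Line 24 then exposes the Malloc0 bdev created earlier as a namespace of that subsystem:

# Rough standalone equivalent:
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0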
00:26:13.466 [2024-04-18 21:19:29.314463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.314674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.314688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.314956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.315163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.315176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.315378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.315595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.315609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.315898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.316112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.316125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.316352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.316572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.316585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.316811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.317084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.317097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.317299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.317583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.317597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 
00:26:13.467 [2024-04-18 21:19:29.317816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.318100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.318113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.318386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.318489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.318502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.318648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.319002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.319015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.319228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.319428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.319441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.319774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.319978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.319992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.320325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.320521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.320534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.320735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.321068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.321081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 
00:26:13.467 21:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.467 [2024-04-18 21:19:29.321294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.321575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 21:19:29 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.467 [2024-04-18 21:19:29.321589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.321708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 21:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.467 [2024-04-18 21:19:29.321985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.321999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:26:13.467 [2024-04-18 21:19:29.322229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.322514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.322528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.322902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.323118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.323131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.467 qpair failed and we were unable to recover it. 00:26:13.467 [2024-04-18 21:19:29.323408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.467 [2024-04-18 21:19:29.323678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.468 [2024-04-18 21:19:29.323691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.468 qpair failed and we were unable to recover it. 00:26:13.468 [2024-04-18 21:19:29.323891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.468 [2024-04-18 21:19:29.324174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.468 [2024-04-18 21:19:29.324187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.468 qpair failed and we were unable to recover it. 
00:26:13.468 [2024-04-18 21:19:29.324401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.468 [2024-04-18 21:19:29.324517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.468 [2024-04-18 21:19:29.324531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3080000b90 with addr=10.0.0.2, port=4420 00:26:13.468 qpair failed and we were unable to recover it. 00:26:13.468 [2024-04-18 21:19:29.324761] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.468 [2024-04-18 21:19:29.324810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.468 [2024-04-18 21:19:29.327860] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:13.468 [2024-04-18 21:19:29.327908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f3080000b90 (107): Transport endpoint is not connected 00:26:13.468 [2024-04-18 21:19:29.327956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.468 qpair failed and we were unable to recover it. 00:26:13.468 21:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.468 21:19:29 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:13.468 21:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.468 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:26:13.468 [2024-04-18 21:19:29.337165] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.468 21:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.468 [2024-04-18 21:19:29.337294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.468 [2024-04-18 21:19:29.337316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.468 [2024-04-18 21:19:29.337324] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.468 [2024-04-18 21:19:29.337331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.468 [2024-04-18 21:19:29.337349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.468 qpair failed and we were unable to recover it. 
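With lines 25-26 the listeners are finally in place (one for the subsystem, one for discovery, both on 10.0.0.2 port 4420, as the nvmf_tcp_listen notice above confirms), so the TCP connect() itself now succeeds and the failure mode changes: the target's ctrlr.c rejects the I/O queue pair because it does not recognize controller ID 0x1, the host sees its Fabrics CONNECT command complete with status sct 1, sc 130, and qpair id 4 is torn down. That four-part block (Unknown controller ID, Connect command failed rc -5, failed to poll the CONNECT command, CQ transport error -6) is the pattern that repeats for the rest of the run. The two listener steps correspond roughly to:

# Rough standalone equivalents of the traced steps:
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420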
00:26:13.468 21:19:29 -- host/target_disconnect.sh@58 -- # wait 3204047 00:26:13.468 [2024-04-18 21:19:29.347135] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.468 [2024-04-18 21:19:29.347285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.468 [2024-04-18 21:19:29.347305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.468 [2024-04-18 21:19:29.347312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.468 [2024-04-18 21:19:29.347319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.468 [2024-04-18 21:19:29.347336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.468 qpair failed and we were unable to recover it. 00:26:13.468 [2024-04-18 21:19:29.357128] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.468 [2024-04-18 21:19:29.357230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.468 [2024-04-18 21:19:29.357247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.468 [2024-04-18 21:19:29.357255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.468 [2024-04-18 21:19:29.357261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.468 [2024-04-18 21:19:29.357280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.468 qpair failed and we were unable to recover it. 00:26:13.468 [2024-04-18 21:19:29.367088] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.468 [2024-04-18 21:19:29.367188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.468 [2024-04-18 21:19:29.367207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.468 [2024-04-18 21:19:29.367214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.468 [2024-04-18 21:19:29.367220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.468 [2024-04-18 21:19:29.367236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.468 qpair failed and we were unable to recover it. 
00:26:13.729 [2024-04-18 21:19:29.377090] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.377187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.377206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.377213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.377219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.377234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 00:26:13.729 [2024-04-18 21:19:29.387080] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.387174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.387193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.387200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.387206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.387222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 00:26:13.729 [2024-04-18 21:19:29.397163] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.397260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.397278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.397285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.397291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.397307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 
00:26:13.729 [2024-04-18 21:19:29.407138] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.407239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.407259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.407267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.407273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.407289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 00:26:13.729 [2024-04-18 21:19:29.417232] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.417341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.417359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.417366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.417372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.417389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 00:26:13.729 [2024-04-18 21:19:29.427265] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.427358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.427376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.427383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.427389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.427405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 
00:26:13.729 [2024-04-18 21:19:29.437299] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.437395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.437413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.437421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.437427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.437443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 00:26:13.729 [2024-04-18 21:19:29.447287] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.447387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.447405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.447412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.447418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.447438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 00:26:13.729 [2024-04-18 21:19:29.457352] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.457447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.457464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.457472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.457478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.457494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 
00:26:13.729 [2024-04-18 21:19:29.467345] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.467442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.467460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.467467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.467473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.467489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 00:26:13.729 [2024-04-18 21:19:29.477331] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.477427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.477445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.477452] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.477458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.477474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 00:26:13.729 [2024-04-18 21:19:29.487424] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.487538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.487556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.729 [2024-04-18 21:19:29.487563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.729 [2024-04-18 21:19:29.487568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.729 [2024-04-18 21:19:29.487585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.729 qpair failed and we were unable to recover it. 
00:26:13.729 [2024-04-18 21:19:29.497482] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.729 [2024-04-18 21:19:29.497582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.729 [2024-04-18 21:19:29.497603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.497611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.497617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.497633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.507498] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.507598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.507623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.507630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.507636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.507653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.517520] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.517614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.517632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.517639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.517645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.517662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 
00:26:13.730 [2024-04-18 21:19:29.527514] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.527612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.527629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.527636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.527642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.527659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.537581] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.537706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.537724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.537731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.537737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.537757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.547621] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.547726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.547744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.547751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.547757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.547774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 
00:26:13.730 [2024-04-18 21:19:29.557640] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.557848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.557865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.557872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.557878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.557895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.567738] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.567856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.567874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.567881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.567887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.567903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.577740] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.577837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.577855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.577862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.577869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.577885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 
00:26:13.730 [2024-04-18 21:19:29.587710] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.587807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.587825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.587832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.587838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.587854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.597788] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.597892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.597908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.597915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.597921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.597937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.607763] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.607861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.607879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.607886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.607893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.607909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 
00:26:13.730 [2024-04-18 21:19:29.617806] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.617899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.617917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.617924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.617930] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.617947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.627839] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.627934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.627951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.627959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.627968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.627984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.637950] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.638049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.638066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.638073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.638079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.638096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 
00:26:13.730 [2024-04-18 21:19:29.647867] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.647964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.647981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.647989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.647995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.648011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.730 [2024-04-18 21:19:29.657963] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.730 [2024-04-18 21:19:29.658066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.730 [2024-04-18 21:19:29.658084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.730 [2024-04-18 21:19:29.658091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.730 [2024-04-18 21:19:29.658097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.730 [2024-04-18 21:19:29.658113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.730 qpair failed and we were unable to recover it. 00:26:13.990 [2024-04-18 21:19:29.667950] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.668042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.668060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.668067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.668073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.668090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 
00:26:13.990 [2024-04-18 21:19:29.677970] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.678068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.678085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.678093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.678099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.678115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 00:26:13.990 [2024-04-18 21:19:29.687985] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.688082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.688099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.688106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.688112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.688129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 00:26:13.990 [2024-04-18 21:19:29.698019] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.698118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.698136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.698143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.698149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.698165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 
00:26:13.990 [2024-04-18 21:19:29.708043] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.708134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.708151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.708159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.708165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.708181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 00:26:13.990 [2024-04-18 21:19:29.718082] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.718178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.718196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.718206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.718212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.718228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 00:26:13.990 [2024-04-18 21:19:29.728093] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.728235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.728253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.728261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.728266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.728283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 
00:26:13.990 [2024-04-18 21:19:29.738131] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.738230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.738247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.738255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.738261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.738277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 00:26:13.990 [2024-04-18 21:19:29.748191] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.748295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.748312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.748319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.990 [2024-04-18 21:19:29.748326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.990 [2024-04-18 21:19:29.748342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.990 qpair failed and we were unable to recover it. 00:26:13.990 [2024-04-18 21:19:29.758114] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.990 [2024-04-18 21:19:29.758258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.990 [2024-04-18 21:19:29.758276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.990 [2024-04-18 21:19:29.758284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.758290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.758306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 
00:26:13.991 [2024-04-18 21:19:29.768180] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.768286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.768303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.768311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.768316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.768332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.778221] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.778314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.778332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.778340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.778346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.778362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.788244] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.788335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.788352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.788359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.788365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.788382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 
00:26:13.991 [2024-04-18 21:19:29.798305] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.798431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.798449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.798457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.798463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.798478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.808302] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.808406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.808427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.808435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.808441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.808457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.818347] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.818442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.818460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.818467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.818473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.818490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 
00:26:13.991 [2024-04-18 21:19:29.828384] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.828474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.828491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.828499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.828505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.828527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.838420] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.838517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.838534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.838542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.838547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.838563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.848430] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.848534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.848552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.848560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.848566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.848586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 
00:26:13.991 [2024-04-18 21:19:29.858464] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.858560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.858578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.858586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.858591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.858609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.868419] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.868522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.868540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.868547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.868554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.868570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.878551] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.878650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.878667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.878674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.878680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.878697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 
00:26:13.991 [2024-04-18 21:19:29.888571] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.888663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.888680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.888688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.888694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.888710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.898490] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.898591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.898612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.898619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.898625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.898641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:13.991 [2024-04-18 21:19:29.908582] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.908721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.908739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.908746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.908752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.908768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 
00:26:13.991 [2024-04-18 21:19:29.918635] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.991 [2024-04-18 21:19:29.918765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.991 [2024-04-18 21:19:29.918782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.991 [2024-04-18 21:19:29.918790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.991 [2024-04-18 21:19:29.918796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:13.991 [2024-04-18 21:19:29.918812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:13.991 qpair failed and we were unable to recover it. 00:26:14.263 [2024-04-18 21:19:29.928647] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.263 [2024-04-18 21:19:29.928744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:29.928761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:29.928768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:29.928774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:29.928790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:29.938688] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:29.938784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:29.938802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:29.938809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:29.938815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:29.938835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 
00:26:14.264 [2024-04-18 21:19:29.948725] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:29.948850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:29.948867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:29.948874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:29.948880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:29.948897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:29.958754] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:29.958848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:29.958866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:29.958873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:29.958879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:29.958896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:29.968759] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:29.968854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:29.968872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:29.968879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:29.968885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:29.968901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 
00:26:14.264 [2024-04-18 21:19:29.978893] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:29.978988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:29.979005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:29.979012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:29.979018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:29.979035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:29.988813] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:29.988905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:29.988926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:29.988933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:29.988939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:29.988955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:29.998871] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:29.998966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:29.998984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:29.998991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:29.998997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:29.999014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 
00:26:14.264 [2024-04-18 21:19:30.008915] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.009064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.009082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.009089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.009095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.009112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:30.018948] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.019051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.019069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.019077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.019084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.019101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:30.028962] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.029062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.029081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.029088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.029097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.029114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 
00:26:14.264 [2024-04-18 21:19:30.039010] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.039117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.039135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.039143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.039150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.039167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:30.049062] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.049166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.049185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.049193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.049199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.049216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:30.058977] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.059069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.059087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.059095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.059100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.059117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 
00:26:14.264 [2024-04-18 21:19:30.069010] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.069108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.069126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.069133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.069139] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.069155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:30.079093] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.079300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.079328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.079335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.079342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.079371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:30.089164] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.089294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.089311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.089318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.089324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.089341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 
00:26:14.264 [2024-04-18 21:19:30.099179] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.099293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.099310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.099317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.099323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.099339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.264 qpair failed and we were unable to recover it. 00:26:14.264 [2024-04-18 21:19:30.109168] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.264 [2024-04-18 21:19:30.109267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.264 [2024-04-18 21:19:30.109284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.264 [2024-04-18 21:19:30.109291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.264 [2024-04-18 21:19:30.109297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.264 [2024-04-18 21:19:30.109312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.265 qpair failed and we were unable to recover it. 00:26:14.265 [2024-04-18 21:19:30.119133] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.265 [2024-04-18 21:19:30.119257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.265 [2024-04-18 21:19:30.119274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.265 [2024-04-18 21:19:30.119285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.265 [2024-04-18 21:19:30.119291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.265 [2024-04-18 21:19:30.119307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.265 qpair failed and we were unable to recover it. 
00:26:14.265 [2024-04-18 21:19:30.129255] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.265 [2024-04-18 21:19:30.129352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.265 [2024-04-18 21:19:30.129369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.265 [2024-04-18 21:19:30.129376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.265 [2024-04-18 21:19:30.129382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.265 [2024-04-18 21:19:30.129398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.265 qpair failed and we were unable to recover it. 00:26:14.265 [2024-04-18 21:19:30.139267] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.265 [2024-04-18 21:19:30.139366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.265 [2024-04-18 21:19:30.139383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.265 [2024-04-18 21:19:30.139390] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.265 [2024-04-18 21:19:30.139396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.265 [2024-04-18 21:19:30.139412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.265 qpair failed and we were unable to recover it. 00:26:14.265 [2024-04-18 21:19:30.149289] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.265 [2024-04-18 21:19:30.149385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.265 [2024-04-18 21:19:30.149401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.265 [2024-04-18 21:19:30.149408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.265 [2024-04-18 21:19:30.149414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.265 [2024-04-18 21:19:30.149430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.265 qpair failed and we were unable to recover it. 
00:26:14.265 [2024-04-18 21:19:30.159322] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.265 [2024-04-18 21:19:30.159422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.265 [2024-04-18 21:19:30.159440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.265 [2024-04-18 21:19:30.159447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.265 [2024-04-18 21:19:30.159453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.265 [2024-04-18 21:19:30.159469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.265 qpair failed and we were unable to recover it. 00:26:14.265 [2024-04-18 21:19:30.169324] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.265 [2024-04-18 21:19:30.169423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.265 [2024-04-18 21:19:30.169441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.265 [2024-04-18 21:19:30.169449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.265 [2024-04-18 21:19:30.169455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.265 [2024-04-18 21:19:30.169471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.265 qpair failed and we were unable to recover it. 00:26:14.265 [2024-04-18 21:19:30.179371] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.265 [2024-04-18 21:19:30.179466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.265 [2024-04-18 21:19:30.179484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.265 [2024-04-18 21:19:30.179492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.265 [2024-04-18 21:19:30.179498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.265 [2024-04-18 21:19:30.179520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.265 qpair failed and we were unable to recover it. 
00:26:14.529 [2024-04-18 21:19:30.189395] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.189488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.189504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.189516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.189522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.189539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 00:26:14.529 [2024-04-18 21:19:30.199382] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.199491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.199509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.199522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.199529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.199545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 00:26:14.529 [2024-04-18 21:19:30.209435] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.209541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.209558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.209568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.209574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.209591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 
00:26:14.529 [2024-04-18 21:19:30.219459] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.219554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.219571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.219578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.219585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.219600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 00:26:14.529 [2024-04-18 21:19:30.229489] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.229633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.229651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.229658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.229664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.229679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 00:26:14.529 [2024-04-18 21:19:30.239563] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.239662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.239679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.239686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.239692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.239709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 
00:26:14.529 [2024-04-18 21:19:30.249595] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.249689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.249707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.249714] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.249719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.249735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 00:26:14.529 [2024-04-18 21:19:30.259597] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.259694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.259712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.259720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.259726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.259742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 00:26:14.529 [2024-04-18 21:19:30.269656] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.269753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.269770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.269777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.529 [2024-04-18 21:19:30.269783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.529 [2024-04-18 21:19:30.269800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.529 qpair failed and we were unable to recover it. 
00:26:14.529 [2024-04-18 21:19:30.279678] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.529 [2024-04-18 21:19:30.279800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.529 [2024-04-18 21:19:30.279817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.529 [2024-04-18 21:19:30.279824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.279831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.279847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.289673] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.289772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.289789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.289797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.289802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.289818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.299713] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.299807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.299828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.299835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.299841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.299857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 
00:26:14.530 [2024-04-18 21:19:30.309747] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.309841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.309858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.309865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.309871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.309888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.319741] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.319841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.319859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.319866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.319871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.319888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.329784] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.329884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.329901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.329908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.329914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.329930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 
00:26:14.530 [2024-04-18 21:19:30.339820] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.339916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.339933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.339940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.339946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.339965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.349831] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.349928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.349944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.349952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.349959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.349975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.359862] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.359961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.359979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.359987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.359993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.360009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 
00:26:14.530 [2024-04-18 21:19:30.369902] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.369999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.370017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.370025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.370031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.370047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.379903] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.379997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.380015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.380023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.380029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.380045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.389951] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.390046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.390068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.390076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.390082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.390098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 
00:26:14.530 [2024-04-18 21:19:30.399974] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.400070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.400087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.400094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.400100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.400116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.410031] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.410165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.410181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.410188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.410194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.410209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.420038] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.420182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.420199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.420205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.420211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.420227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 
00:26:14.530 [2024-04-18 21:19:30.430083] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.430175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.430193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.530 [2024-04-18 21:19:30.430201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.530 [2024-04-18 21:19:30.430210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.530 [2024-04-18 21:19:30.430226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.530 qpair failed and we were unable to recover it. 00:26:14.530 [2024-04-18 21:19:30.440072] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.530 [2024-04-18 21:19:30.440211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.530 [2024-04-18 21:19:30.440228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.531 [2024-04-18 21:19:30.440236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.531 [2024-04-18 21:19:30.440242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.531 [2024-04-18 21:19:30.440258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.531 qpair failed and we were unable to recover it. 00:26:14.531 [2024-04-18 21:19:30.450130] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.531 [2024-04-18 21:19:30.450225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.531 [2024-04-18 21:19:30.450243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.531 [2024-04-18 21:19:30.450250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.531 [2024-04-18 21:19:30.450255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.531 [2024-04-18 21:19:30.450271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.531 qpair failed and we were unable to recover it. 
00:26:14.791 [2024-04-18 21:19:30.460105] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.460204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.460221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.460229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.460234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.460251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 00:26:14.791 [2024-04-18 21:19:30.470228] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.470320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.470337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.470344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.470350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.470366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 00:26:14.791 [2024-04-18 21:19:30.480245] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.480354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.480372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.480379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.480385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.480401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 
00:26:14.791 [2024-04-18 21:19:30.490269] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.490365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.490381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.490388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.490394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.490410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 00:26:14.791 [2024-04-18 21:19:30.500284] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.500414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.500432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.500439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.500445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.500461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 00:26:14.791 [2024-04-18 21:19:30.510270] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.510366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.510384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.510392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.510397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.510413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 
00:26:14.791 [2024-04-18 21:19:30.520346] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.520480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.520497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.520515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.520522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.520538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 00:26:14.791 [2024-04-18 21:19:30.530347] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.530444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.530462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.530469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.530475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.530490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 00:26:14.791 [2024-04-18 21:19:30.540380] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.791 [2024-04-18 21:19:30.540476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.791 [2024-04-18 21:19:30.540493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.791 [2024-04-18 21:19:30.540499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.791 [2024-04-18 21:19:30.540505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.791 [2024-04-18 21:19:30.540529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.791 qpair failed and we were unable to recover it. 
00:26:14.792 [2024-04-18 21:19:30.550407] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.550505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.550528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.550535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.550540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.792 [2024-04-18 21:19:30.550556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.560446] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.560553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.560571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.560578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.560585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3080000b90 00:26:14.792 [2024-04-18 21:19:30.560601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.570470] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.570609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.570640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.570652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.570661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.570696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 
00:26:14.792 [2024-04-18 21:19:30.580623] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.580731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.580751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.580759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.580765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.580783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.590527] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.590627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.590647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.590654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.590660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.590677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.600567] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.600668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.600688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.600696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.600702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.600719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 
00:26:14.792 [2024-04-18 21:19:30.610635] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.610746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.610766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.610777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.610784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.610801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.620606] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.620705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.620726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.620734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.620740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.620757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.630680] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.630780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.630800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.630807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.630814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.630831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 
00:26:14.792 [2024-04-18 21:19:30.640691] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.640790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.640809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.640817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.640823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.640839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.650747] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.650845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.650864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.650871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.650877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.650893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.660762] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.660891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.660911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.660918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.660924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.660940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 
00:26:14.792 [2024-04-18 21:19:30.670723] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.670818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.670837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.792 [2024-04-18 21:19:30.670845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.792 [2024-04-18 21:19:30.670851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.792 [2024-04-18 21:19:30.670867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.792 qpair failed and we were unable to recover it. 00:26:14.792 [2024-04-18 21:19:30.680803] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.792 [2024-04-18 21:19:30.680899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.792 [2024-04-18 21:19:30.680919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.793 [2024-04-18 21:19:30.680926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.793 [2024-04-18 21:19:30.680932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.793 [2024-04-18 21:19:30.680949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.793 qpair failed and we were unable to recover it. 00:26:14.793 [2024-04-18 21:19:30.690818] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.793 [2024-04-18 21:19:30.690916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.793 [2024-04-18 21:19:30.690935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.793 [2024-04-18 21:19:30.690943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.793 [2024-04-18 21:19:30.690949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.793 [2024-04-18 21:19:30.690965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.793 qpair failed and we were unable to recover it. 
00:26:14.793 [2024-04-18 21:19:30.700806] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.793 [2024-04-18 21:19:30.700902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.793 [2024-04-18 21:19:30.700920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.793 [2024-04-18 21:19:30.700931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.793 [2024-04-18 21:19:30.700937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.793 [2024-04-18 21:19:30.700953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.793 qpair failed and we were unable to recover it. 00:26:14.793 [2024-04-18 21:19:30.710869] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.793 [2024-04-18 21:19:30.710961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.793 [2024-04-18 21:19:30.710981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.793 [2024-04-18 21:19:30.710988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.793 [2024-04-18 21:19:30.710994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:14.793 [2024-04-18 21:19:30.711010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.793 qpair failed and we were unable to recover it. 00:26:15.053 [2024-04-18 21:19:30.720937] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.053 [2024-04-18 21:19:30.721033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.053 [2024-04-18 21:19:30.721051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.053 [2024-04-18 21:19:30.721058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.053 [2024-04-18 21:19:30.721064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.053 [2024-04-18 21:19:30.721081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.053 qpair failed and we were unable to recover it. 
00:26:15.053 [2024-04-18 21:19:30.730952] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.053 [2024-04-18 21:19:30.731055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.053 [2024-04-18 21:19:30.731071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.053 [2024-04-18 21:19:30.731079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.053 [2024-04-18 21:19:30.731084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.053 [2024-04-18 21:19:30.731100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.053 qpair failed and we were unable to recover it. 00:26:15.053 [2024-04-18 21:19:30.740938] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.053 [2024-04-18 21:19:30.741066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.053 [2024-04-18 21:19:30.741085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.053 [2024-04-18 21:19:30.741092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.053 [2024-04-18 21:19:30.741098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.741114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.751031] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.751130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.751149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.751157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.751163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.751178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 
00:26:15.054 [2024-04-18 21:19:30.761031] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.761140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.761158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.761166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.761172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.761193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.771013] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.771113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.771132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.771140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.771146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.771162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.781125] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.781223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.781242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.781250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.781256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.781272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 
00:26:15.054 [2024-04-18 21:19:30.791156] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.791252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.791275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.791282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.791288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.791304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.801215] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.801359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.801378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.801385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.801391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.801407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.811227] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.811341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.811361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.811368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.811374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.811390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 
00:26:15.054 [2024-04-18 21:19:30.821218] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.821316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.821335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.821342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.821348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.821363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.831276] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.831374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.831393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.831401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.831407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.831426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.841303] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.841406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.841424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.841432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.841438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.841453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 
00:26:15.054 [2024-04-18 21:19:30.851371] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.851469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.851488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.851496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.851502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.851525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.861365] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.861455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.861475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.861483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.861489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.861506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 00:26:15.054 [2024-04-18 21:19:30.871378] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.871472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.054 [2024-04-18 21:19:30.871491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.054 [2024-04-18 21:19:30.871499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.054 [2024-04-18 21:19:30.871505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.054 [2024-04-18 21:19:30.871526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.054 qpair failed and we were unable to recover it. 
00:26:15.054 [2024-04-18 21:19:30.881428] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.054 [2024-04-18 21:19:30.881532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.881554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.881562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.881568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.881584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 00:26:15.055 [2024-04-18 21:19:30.891391] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.891540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.891559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.891567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.891573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.891588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 00:26:15.055 [2024-04-18 21:19:30.901472] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.901574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.901593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.901601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.901606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.901622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 
00:26:15.055 [2024-04-18 21:19:30.911525] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.911624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.911642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.911650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.911655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.911672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 00:26:15.055 [2024-04-18 21:19:30.921556] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.921661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.921680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.921688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.921694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.921714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 00:26:15.055 [2024-04-18 21:19:30.931486] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.931589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.931608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.931616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.931622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.931639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 
00:26:15.055 [2024-04-18 21:19:30.941595] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.941693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.941712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.941719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.941725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.941741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 00:26:15.055 [2024-04-18 21:19:30.951619] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.951712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.951731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.951738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.951744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.951760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 00:26:15.055 [2024-04-18 21:19:30.961634] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.961733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.961751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.961758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.961765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.961781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 
00:26:15.055 [2024-04-18 21:19:30.971678] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.971776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.971798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.971806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.971812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.971827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 00:26:15.055 [2024-04-18 21:19:30.981719] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.055 [2024-04-18 21:19:30.981818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.055 [2024-04-18 21:19:30.981837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.055 [2024-04-18 21:19:30.981845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.055 [2024-04-18 21:19:30.981851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.055 [2024-04-18 21:19:30.981867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.055 qpair failed and we were unable to recover it. 00:26:15.316 [2024-04-18 21:19:30.991739] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:30.991833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:30.991851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:30.991858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.316 [2024-04-18 21:19:30.991864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.316 [2024-04-18 21:19:30.991880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.316 qpair failed and we were unable to recover it. 
00:26:15.316 [2024-04-18 21:19:31.001781] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:31.001878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:31.001897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:31.001904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.316 [2024-04-18 21:19:31.001910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.316 [2024-04-18 21:19:31.001926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.316 qpair failed and we were unable to recover it. 00:26:15.316 [2024-04-18 21:19:31.011817] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:31.011915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:31.011934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:31.011941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.316 [2024-04-18 21:19:31.011947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.316 [2024-04-18 21:19:31.011966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.316 qpair failed and we were unable to recover it. 00:26:15.316 [2024-04-18 21:19:31.021850] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:31.021954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:31.021973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:31.021981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.316 [2024-04-18 21:19:31.021987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.316 [2024-04-18 21:19:31.022003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.316 qpair failed and we were unable to recover it. 
00:26:15.316 [2024-04-18 21:19:31.031838] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:31.031933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:31.031952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:31.031960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.316 [2024-04-18 21:19:31.031966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.316 [2024-04-18 21:19:31.031982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.316 qpair failed and we were unable to recover it. 00:26:15.316 [2024-04-18 21:19:31.041996] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:31.042094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:31.042113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:31.042120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.316 [2024-04-18 21:19:31.042126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.316 [2024-04-18 21:19:31.042142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.316 qpair failed and we were unable to recover it. 00:26:15.316 [2024-04-18 21:19:31.051909] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:31.052011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:31.052029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:31.052037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.316 [2024-04-18 21:19:31.052043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.316 [2024-04-18 21:19:31.052059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.316 qpair failed and we were unable to recover it. 
00:26:15.316 [2024-04-18 21:19:31.061925] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:31.062019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:31.062042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:31.062050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.316 [2024-04-18 21:19:31.062055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.316 [2024-04-18 21:19:31.062071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.316 qpair failed and we were unable to recover it. 00:26:15.316 [2024-04-18 21:19:31.071946] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.316 [2024-04-18 21:19:31.072043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.316 [2024-04-18 21:19:31.072062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.316 [2024-04-18 21:19:31.072069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.072075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.072091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.082012] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.082108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.082127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.082134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.082140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.082155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 
00:26:15.317 [2024-04-18 21:19:31.092028] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.092128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.092147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.092154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.092160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.092176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.102057] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.102152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.102171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.102178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.102188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.102204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.112084] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.112182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.112200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.112209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.112215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.112231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 
00:26:15.317 [2024-04-18 21:19:31.122080] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.122173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.122192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.122199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.122205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.122220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.132143] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.132240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.132259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.132266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.132272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.132288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.142221] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.142313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.142332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.142340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.142346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.142362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 
00:26:15.317 [2024-04-18 21:19:31.152202] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.152297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.152316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.152323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.152329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.152346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.162240] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.162340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.162360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.162367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.162373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.162389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.172268] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.172366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.172386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.172394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.172400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.172418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 
00:26:15.317 [2024-04-18 21:19:31.182290] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.182385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.182404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.182412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.182418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.182434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.192225] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.192320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.192339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.192347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.192356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.192373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 00:26:15.317 [2024-04-18 21:19:31.202375] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.317 [2024-04-18 21:19:31.202470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.317 [2024-04-18 21:19:31.202489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.317 [2024-04-18 21:19:31.202496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.317 [2024-04-18 21:19:31.202502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.317 [2024-04-18 21:19:31.202523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.317 qpair failed and we were unable to recover it. 
00:26:15.317 [2024-04-18 21:19:31.212362] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.318 [2024-04-18 21:19:31.212464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.318 [2024-04-18 21:19:31.212482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.318 [2024-04-18 21:19:31.212490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.318 [2024-04-18 21:19:31.212496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.318 [2024-04-18 21:19:31.212519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.318 qpair failed and we were unable to recover it. 00:26:15.318 [2024-04-18 21:19:31.222397] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.318 [2024-04-18 21:19:31.222534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.318 [2024-04-18 21:19:31.222553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.318 [2024-04-18 21:19:31.222561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.318 [2024-04-18 21:19:31.222567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.318 [2024-04-18 21:19:31.222583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.318 qpair failed and we were unable to recover it. 00:26:15.318 [2024-04-18 21:19:31.232427] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.318 [2024-04-18 21:19:31.232535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.318 [2024-04-18 21:19:31.232554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.318 [2024-04-18 21:19:31.232562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.318 [2024-04-18 21:19:31.232568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.318 [2024-04-18 21:19:31.232584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.318 qpair failed and we were unable to recover it. 
00:26:15.318 [2024-04-18 21:19:31.242476] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.318 [2024-04-18 21:19:31.242585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.318 [2024-04-18 21:19:31.242604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.318 [2024-04-18 21:19:31.242611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.318 [2024-04-18 21:19:31.242617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.318 [2024-04-18 21:19:31.242633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.318 qpair failed and we were unable to recover it. 00:26:15.579 [2024-04-18 21:19:31.252525] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.579 [2024-04-18 21:19:31.252627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.579 [2024-04-18 21:19:31.252646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.579 [2024-04-18 21:19:31.252654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.579 [2024-04-18 21:19:31.252660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.579 [2024-04-18 21:19:31.252676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.579 qpair failed and we were unable to recover it. 00:26:15.579 [2024-04-18 21:19:31.262503] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.579 [2024-04-18 21:19:31.262606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.579 [2024-04-18 21:19:31.262625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.579 [2024-04-18 21:19:31.262633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.579 [2024-04-18 21:19:31.262638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.579 [2024-04-18 21:19:31.262655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.579 qpair failed and we were unable to recover it. 
00:26:15.579 [2024-04-18 21:19:31.272555] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.579 [2024-04-18 21:19:31.272651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.579 [2024-04-18 21:19:31.272669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.579 [2024-04-18 21:19:31.272677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.579 [2024-04-18 21:19:31.272682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.579 [2024-04-18 21:19:31.272699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.579 qpair failed and we were unable to recover it. 00:26:15.579 [2024-04-18 21:19:31.282618] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.579 [2024-04-18 21:19:31.282714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.579 [2024-04-18 21:19:31.282732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.579 [2024-04-18 21:19:31.282745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.579 [2024-04-18 21:19:31.282751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.579 [2024-04-18 21:19:31.282767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.579 qpair failed and we were unable to recover it. 00:26:15.579 [2024-04-18 21:19:31.292584] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.579 [2024-04-18 21:19:31.292683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.579 [2024-04-18 21:19:31.292702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.579 [2024-04-18 21:19:31.292709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.579 [2024-04-18 21:19:31.292715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.579 [2024-04-18 21:19:31.292731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.579 qpair failed and we were unable to recover it. 
00:26:15.579 [2024-04-18 21:19:31.302639] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.579 [2024-04-18 21:19:31.302735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.579 [2024-04-18 21:19:31.302754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.579 [2024-04-18 21:19:31.302761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.579 [2024-04-18 21:19:31.302767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.579 [2024-04-18 21:19:31.302783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.579 qpair failed and we were unable to recover it. 00:26:15.579 [2024-04-18 21:19:31.312668] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.579 [2024-04-18 21:19:31.312765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.579 [2024-04-18 21:19:31.312784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.579 [2024-04-18 21:19:31.312791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.579 [2024-04-18 21:19:31.312797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.579 [2024-04-18 21:19:31.312812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.579 qpair failed and we were unable to recover it. 00:26:15.579 [2024-04-18 21:19:31.322658] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.579 [2024-04-18 21:19:31.322790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.579 [2024-04-18 21:19:31.322809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.579 [2024-04-18 21:19:31.322817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.579 [2024-04-18 21:19:31.322822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.579 [2024-04-18 21:19:31.322838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.579 qpair failed and we were unable to recover it. 
00:26:15.580 [2024-04-18 21:19:31.332727] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.332842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.332860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.332867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.332873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.332889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.580 [2024-04-18 21:19:31.342752] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.342848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.342866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.342874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.342880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.342896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.580 [2024-04-18 21:19:31.352781] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.352878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.352897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.352905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.352911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.352926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 
00:26:15.580 [2024-04-18 21:19:31.362770] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.362865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.362884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.362891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.362898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.362915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.580 [2024-04-18 21:19:31.372870] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.373003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.373021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.373032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.373038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.373054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.580 [2024-04-18 21:19:31.382798] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.382935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.382954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.382961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.382967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.382983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 
00:26:15.580 [2024-04-18 21:19:31.392914] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.393008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.393026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.393034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.393040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.393056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.580 [2024-04-18 21:19:31.402939] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.403037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.403055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.403063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.403069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.403085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.580 [2024-04-18 21:19:31.412955] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.413052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.413069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.413077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.413083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.413099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 
00:26:15.580 [2024-04-18 21:19:31.422977] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.423074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.423093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.423101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.423107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.423123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.580 [2024-04-18 21:19:31.433014] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.433105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.433123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.433130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.433137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.433152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.580 [2024-04-18 21:19:31.443039] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.443136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.443156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.443163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.443169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.443185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 
00:26:15.580 [2024-04-18 21:19:31.453062] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.580 [2024-04-18 21:19:31.453158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.580 [2024-04-18 21:19:31.453177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.580 [2024-04-18 21:19:31.453184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.580 [2024-04-18 21:19:31.453191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.580 [2024-04-18 21:19:31.453207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.580 qpair failed and we were unable to recover it. 00:26:15.581 [2024-04-18 21:19:31.463086] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.581 [2024-04-18 21:19:31.463183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.581 [2024-04-18 21:19:31.463202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.581 [2024-04-18 21:19:31.463214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.581 [2024-04-18 21:19:31.463220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.581 [2024-04-18 21:19:31.463236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.581 qpair failed and we were unable to recover it. 00:26:15.581 [2024-04-18 21:19:31.473095] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.581 [2024-04-18 21:19:31.473189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.581 [2024-04-18 21:19:31.473209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.581 [2024-04-18 21:19:31.473217] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.581 [2024-04-18 21:19:31.473223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.581 [2024-04-18 21:19:31.473239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.581 qpair failed and we were unable to recover it. 
00:26:15.581 [2024-04-18 21:19:31.483149] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.581 [2024-04-18 21:19:31.483248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.581 [2024-04-18 21:19:31.483267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.581 [2024-04-18 21:19:31.483274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.581 [2024-04-18 21:19:31.483281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.581 [2024-04-18 21:19:31.483296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.581 qpair failed and we were unable to recover it. 00:26:15.581 [2024-04-18 21:19:31.493167] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.581 [2024-04-18 21:19:31.493263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.581 [2024-04-18 21:19:31.493281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.581 [2024-04-18 21:19:31.493289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.581 [2024-04-18 21:19:31.493295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.581 [2024-04-18 21:19:31.493311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.581 qpair failed and we were unable to recover it. 00:26:15.581 [2024-04-18 21:19:31.503333] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.581 [2024-04-18 21:19:31.503432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.581 [2024-04-18 21:19:31.503451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.581 [2024-04-18 21:19:31.503458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.581 [2024-04-18 21:19:31.503464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.581 [2024-04-18 21:19:31.503480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.581 qpair failed and we were unable to recover it. 
00:26:15.842 [2024-04-18 21:19:31.513226] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.513325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.513345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.513353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.513359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.513375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 00:26:15.842 [2024-04-18 21:19:31.523267] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.523363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.523381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.523389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.523395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.523411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 00:26:15.842 [2024-04-18 21:19:31.533290] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.533391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.533409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.533417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.533423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.533438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 
00:26:15.842 [2024-04-18 21:19:31.543317] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.543412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.543431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.543438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.543444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.543460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 00:26:15.842 [2024-04-18 21:19:31.553337] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.553431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.553453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.553461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.553466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.553482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 00:26:15.842 [2024-04-18 21:19:31.563379] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.563478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.563498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.563506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.563517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.563534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 
00:26:15.842 [2024-04-18 21:19:31.573551] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.573658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.573678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.573686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.573692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.573708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 00:26:15.842 [2024-04-18 21:19:31.583475] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.583680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.583699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.583706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.583713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.583729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 00:26:15.842 [2024-04-18 21:19:31.593565] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.593662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.593682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.593690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.593696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.593713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 
00:26:15.842 [2024-04-18 21:19:31.603444] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.603553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.603573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.603580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.603586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.603602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 00:26:15.842 [2024-04-18 21:19:31.613543] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.613646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.613664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.613673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.613679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.613695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 00:26:15.842 [2024-04-18 21:19:31.623468] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.842 [2024-04-18 21:19:31.623568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.842 [2024-04-18 21:19:31.623591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.842 [2024-04-18 21:19:31.623599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.842 [2024-04-18 21:19:31.623605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.842 [2024-04-18 21:19:31.623621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.842 qpair failed and we were unable to recover it. 
00:26:15.842 [2024-04-18 21:19:31.633490] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.633589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.633608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.633615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.633621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.633637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.643606] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.643704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.643726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.643733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.643739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.643755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.653626] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.653724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.653743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.653750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.653756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.653772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 
00:26:15.843 [2024-04-18 21:19:31.663613] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.663705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.663724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.663731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.663738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.663754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.673685] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.673780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.673799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.673807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.673813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.673829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.683723] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.683819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.683838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.683845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.683851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.683871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 
00:26:15.843 [2024-04-18 21:19:31.693730] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.693823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.693842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.693849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.693855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.693871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.703770] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.703868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.703887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.703894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.703900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.703916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.713790] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.713883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.713902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.713909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.713915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.713930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 
00:26:15.843 [2024-04-18 21:19:31.723827] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.723924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.723943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.723951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.723957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.723973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.733848] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.733942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.733964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.733971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.733977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.733993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.743904] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.744015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.744033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.744041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.744047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.744062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 
00:26:15.843 [2024-04-18 21:19:31.753912] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.754007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.754026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.754033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.754039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.843 [2024-04-18 21:19:31.754055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.843 qpair failed and we were unable to recover it. 00:26:15.843 [2024-04-18 21:19:31.763921] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.843 [2024-04-18 21:19:31.764018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.843 [2024-04-18 21:19:31.764036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.843 [2024-04-18 21:19:31.764044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.843 [2024-04-18 21:19:31.764050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:15.844 [2024-04-18 21:19:31.764066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.844 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.773967] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.774063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.774082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.774089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.774095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.774115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 
00:26:16.116 [2024-04-18 21:19:31.783992] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.784087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.784105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.784113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.784119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.784135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.794022] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.794121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.794139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.794147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.794152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.794168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.803995] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.804093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.804112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.804120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.804126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.804142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 
00:26:16.116 [2024-04-18 21:19:31.814023] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.814121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.814140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.814147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.814153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.814169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.824086] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.824180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.824202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.824209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.824215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.824231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.834072] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.834164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.834183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.834190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.834196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.834212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 
00:26:16.116 [2024-04-18 21:19:31.844149] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.844250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.844269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.844276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.844282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.844298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.854175] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.854270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.854289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.854296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.854302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.854317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.864179] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.864277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.864296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.864303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.864313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.864329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 
00:26:16.116 [2024-04-18 21:19:31.874172] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.874263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.874282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.874289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.874295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.874311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.884326] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.884421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.884440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.884448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.884453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.884469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 00:26:16.116 [2024-04-18 21:19:31.894259] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.894356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.894374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.894381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.116 [2024-04-18 21:19:31.894387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.116 [2024-04-18 21:19:31.894403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.116 qpair failed and we were unable to recover it. 
00:26:16.116 [2024-04-18 21:19:31.904312] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.116 [2024-04-18 21:19:31.904408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.116 [2024-04-18 21:19:31.904427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.116 [2024-04-18 21:19:31.904434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.117 [2024-04-18 21:19:31.904440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.117 [2024-04-18 21:19:31.904456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.117 qpair failed and we were unable to recover it. 00:26:16.117 [2024-04-18 21:19:31.914292] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.117 [2024-04-18 21:19:31.914391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.117 [2024-04-18 21:19:31.914409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.117 [2024-04-18 21:19:31.914417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.117 [2024-04-18 21:19:31.914423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.117 [2024-04-18 21:19:31.914438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.117 qpair failed and we were unable to recover it. 00:26:16.117 [2024-04-18 21:19:31.924370] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.117 [2024-04-18 21:19:31.924468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.117 [2024-04-18 21:19:31.924485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.117 [2024-04-18 21:19:31.924493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.117 [2024-04-18 21:19:31.924499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.117 [2024-04-18 21:19:31.924523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.117 qpair failed and we were unable to recover it. 
00:26:16.117 [2024-04-18 21:19:31.934409] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.117 [2024-04-18 21:19:31.934503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.117 [2024-04-18 21:19:31.934535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.117 [2024-04-18 21:19:31.934543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.117 [2024-04-18 21:19:31.934548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.117 [2024-04-18 21:19:31.934565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.117 qpair failed and we were unable to recover it. 00:26:16.117 [2024-04-18 21:19:31.944428] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.117 [2024-04-18 21:19:31.944531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.117 [2024-04-18 21:19:31.944549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.117 [2024-04-18 21:19:31.944556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.117 [2024-04-18 21:19:31.944562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.117 [2024-04-18 21:19:31.944579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.117 qpair failed and we were unable to recover it. 00:26:16.117 [2024-04-18 21:19:31.954397] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.117 [2024-04-18 21:19:31.954496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.117 [2024-04-18 21:19:31.954520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.117 [2024-04-18 21:19:31.954528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.117 [2024-04-18 21:19:31.954538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.117 [2024-04-18 21:19:31.954554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.117 qpair failed and we were unable to recover it. 
00:26:16.117 [2024-04-18 21:19:31.964528] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.117 [2024-04-18 21:19:31.964631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.117 [2024-04-18 21:19:31.964649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.117 [2024-04-18 21:19:31.964655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.117 [2024-04-18 21:19:31.964661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.117 [2024-04-18 21:19:31.964677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.117 qpair failed and we were unable to recover it. 00:26:16.117 [2024-04-18 21:19:31.974532] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.117 [2024-04-18 21:19:31.974628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.117 [2024-04-18 21:19:31.974646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.117 [2024-04-18 21:19:31.974654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.117 [2024-04-18 21:19:31.974660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.117 [2024-04-18 21:19:31.974676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.117 qpair failed and we were unable to recover it. 00:26:16.117 [2024-04-18 21:19:31.984546] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.118 [2024-04-18 21:19:31.984664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.118 [2024-04-18 21:19:31.984683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.118 [2024-04-18 21:19:31.984690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.118 [2024-04-18 21:19:31.984696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.118 [2024-04-18 21:19:31.984712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.118 qpair failed and we were unable to recover it. 
00:26:16.118 [2024-04-18 21:19:31.994578] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.118 [2024-04-18 21:19:31.994674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.118 [2024-04-18 21:19:31.994693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.118 [2024-04-18 21:19:31.994700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.118 [2024-04-18 21:19:31.994706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.118 [2024-04-18 21:19:31.994722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.118 qpair failed and we were unable to recover it. 00:26:16.118 [2024-04-18 21:19:32.004620] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.118 [2024-04-18 21:19:32.004721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.118 [2024-04-18 21:19:32.004740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.118 [2024-04-18 21:19:32.004747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.118 [2024-04-18 21:19:32.004753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.118 [2024-04-18 21:19:32.004769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.118 qpair failed and we were unable to recover it. 00:26:16.118 [2024-04-18 21:19:32.014631] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.118 [2024-04-18 21:19:32.014726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.118 [2024-04-18 21:19:32.014744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.118 [2024-04-18 21:19:32.014752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.118 [2024-04-18 21:19:32.014758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.118 [2024-04-18 21:19:32.014774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.118 qpair failed and we were unable to recover it. 
00:26:16.118 [2024-04-18 21:19:32.024668] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.118 [2024-04-18 21:19:32.024766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.118 [2024-04-18 21:19:32.024783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.118 [2024-04-18 21:19:32.024791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.118 [2024-04-18 21:19:32.024797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.118 [2024-04-18 21:19:32.024813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.118 qpair failed and we were unable to recover it. 00:26:16.118 [2024-04-18 21:19:32.034736] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.118 [2024-04-18 21:19:32.034834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.118 [2024-04-18 21:19:32.034853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.118 [2024-04-18 21:19:32.034861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.118 [2024-04-18 21:19:32.034867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.118 [2024-04-18 21:19:32.034884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.118 qpair failed and we were unable to recover it. 00:26:16.387 [2024-04-18 21:19:32.044653] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.044753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.044772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.044779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.044790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.044806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 
00:26:16.387 [2024-04-18 21:19:32.054699] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.054800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.054818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.054826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.054832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.054848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 00:26:16.387 [2024-04-18 21:19:32.064698] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.064898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.064916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.064923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.064930] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.064946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 00:26:16.387 [2024-04-18 21:19:32.074730] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.074841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.074859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.074867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.074873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.074889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 
00:26:16.387 [2024-04-18 21:19:32.084822] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.084923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.084942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.084950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.084955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.084972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 00:26:16.387 [2024-04-18 21:19:32.094865] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.094963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.094981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.094988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.094994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.095010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 00:26:16.387 [2024-04-18 21:19:32.104885] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.104978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.104997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.105004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.105010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.105026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 
00:26:16.387 [2024-04-18 21:19:32.114908] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.115003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.115021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.115029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.115035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.115051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 00:26:16.387 [2024-04-18 21:19:32.124954] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.387 [2024-04-18 21:19:32.125053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.387 [2024-04-18 21:19:32.125071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.387 [2024-04-18 21:19:32.125079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.387 [2024-04-18 21:19:32.125084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.387 [2024-04-18 21:19:32.125101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.387 qpair failed and we were unable to recover it. 00:26:16.387 [2024-04-18 21:19:32.134954] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.135049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.135068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.135078] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.135084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.135101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 
00:26:16.388 [2024-04-18 21:19:32.145011] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.145108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.145127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.145135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.145140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.145156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.154960] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.155061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.155079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.155087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.155092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.155108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.165031] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.165125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.165143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.165150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.165156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.165172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 
00:26:16.388 [2024-04-18 21:19:32.175013] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.175166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.175185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.175192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.175197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.175213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.185126] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.185223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.185241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.185248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.185253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.185270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.195122] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.195221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.195239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.195246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.195252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.195268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 
00:26:16.388 [2024-04-18 21:19:32.205184] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.205279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.205298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.205305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.205311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.205327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.215166] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.215267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.215285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.215293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.215298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.215314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.225155] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.225252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.225271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.225282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.225288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.225303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 
00:26:16.388 [2024-04-18 21:19:32.235275] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.235387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.235406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.235412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.235418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.235433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.245226] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.245324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.245343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.245351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.245356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.245372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.255320] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.255418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.255437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.255444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.255450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.255466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 
00:26:16.388 [2024-04-18 21:19:32.265353] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.265450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.265469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.265476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.265482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.265497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.275376] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.275471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.275490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.275498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.275504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.275527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.285413] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.285507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.285530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.285537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.285543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.285559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 
00:26:16.388 [2024-04-18 21:19:32.295430] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.388 [2024-04-18 21:19:32.295528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.388 [2024-04-18 21:19:32.295547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.388 [2024-04-18 21:19:32.295554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.388 [2024-04-18 21:19:32.295560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.388 [2024-04-18 21:19:32.295576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.388 qpair failed and we were unable to recover it. 00:26:16.388 [2024-04-18 21:19:32.305459] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.389 [2024-04-18 21:19:32.305589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.389 [2024-04-18 21:19:32.305608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.389 [2024-04-18 21:19:32.305615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.389 [2024-04-18 21:19:32.305621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.389 [2024-04-18 21:19:32.305637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.389 qpair failed and we were unable to recover it. 00:26:16.389 [2024-04-18 21:19:32.315507] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.389 [2024-04-18 21:19:32.315610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.389 [2024-04-18 21:19:32.315639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.389 [2024-04-18 21:19:32.315647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.389 [2024-04-18 21:19:32.315653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.389 [2024-04-18 21:19:32.315670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.389 qpair failed and we were unable to recover it. 
00:26:16.649 [2024-04-18 21:19:32.325565] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.325664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.325683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.325690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.325696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.325713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.335560] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.335659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.335678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.335685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.335691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.335707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.345561] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.345668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.345686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.345694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.345700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.345715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 
00:26:16.649 [2024-04-18 21:19:32.355612] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.355706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.355725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.355733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.355739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.355755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.365650] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.365748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.365767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.365774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.365780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.365797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.375602] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.375706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.375725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.375732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.375738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.375755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 
00:26:16.649 [2024-04-18 21:19:32.385701] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.385797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.385816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.385824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.385830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.385846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.395661] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.395759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.395777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.395784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.395790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.395806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.405785] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.405890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.405912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.405920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.405926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.405941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 
00:26:16.649 [2024-04-18 21:19:32.415785] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.415883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.415901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.415908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.415914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.415930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.425746] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.425841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.425859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.425867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.425873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.425888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.435799] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.435892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.435910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.435918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.435924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.435940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 
00:26:16.649 [2024-04-18 21:19:32.445873] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.445976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.649 [2024-04-18 21:19:32.445994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.649 [2024-04-18 21:19:32.446002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.649 [2024-04-18 21:19:32.446008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.649 [2024-04-18 21:19:32.446026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.649 qpair failed and we were unable to recover it. 00:26:16.649 [2024-04-18 21:19:32.455905] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.649 [2024-04-18 21:19:32.456004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.456023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.456030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.456036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.456052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.650 [2024-04-18 21:19:32.465931] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.466023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.466042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.466049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.466055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.466071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 
00:26:16.650 [2024-04-18 21:19:32.475892] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.475987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.476006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.476013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.476019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.476034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.650 [2024-04-18 21:19:32.485991] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.486096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.486114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.486121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.486127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.486143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.650 [2024-04-18 21:19:32.495942] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.496040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.496061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.496069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.496075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.496091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 
00:26:16.650 [2024-04-18 21:19:32.506058] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.506153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.506172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.506179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.506185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.506201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.650 [2024-04-18 21:19:32.516099] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.516207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.516226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.516234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.516239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.516255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.650 [2024-04-18 21:19:32.526043] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.526143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.526161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.526168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.526175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.526190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 
00:26:16.650 [2024-04-18 21:19:32.536122] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.536223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.536241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.536249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.536255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.536275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.650 [2024-04-18 21:19:32.546161] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.546270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.546290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.546297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.546304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.546320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.650 [2024-04-18 21:19:32.556180] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.556282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.556304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.556312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.556318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.556334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 
00:26:16.650 [2024-04-18 21:19:32.566239] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.566336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.566355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.566362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.566368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.566384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.650 [2024-04-18 21:19:32.576294] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.650 [2024-04-18 21:19:32.576392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.650 [2024-04-18 21:19:32.576411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.650 [2024-04-18 21:19:32.576418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.650 [2024-04-18 21:19:32.576424] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.650 [2024-04-18 21:19:32.576439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.650 qpair failed and we were unable to recover it. 00:26:16.911 [2024-04-18 21:19:32.586275] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.911 [2024-04-18 21:19:32.586367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.911 [2024-04-18 21:19:32.586389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.911 [2024-04-18 21:19:32.586397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.911 [2024-04-18 21:19:32.586402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.911 [2024-04-18 21:19:32.586418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.911 qpair failed and we were unable to recover it. 
00:26:16.911 [2024-04-18 21:19:32.596254] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.911 [2024-04-18 21:19:32.596350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.911 [2024-04-18 21:19:32.596368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.911 [2024-04-18 21:19:32.596375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.911 [2024-04-18 21:19:32.596381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.911 [2024-04-18 21:19:32.596397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.911 qpair failed and we were unable to recover it. 00:26:16.911 [2024-04-18 21:19:32.606485] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.911 [2024-04-18 21:19:32.606585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.911 [2024-04-18 21:19:32.606604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.911 [2024-04-18 21:19:32.606611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.911 [2024-04-18 21:19:32.606617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.911 [2024-04-18 21:19:32.606632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.911 qpair failed and we were unable to recover it. 00:26:16.911 [2024-04-18 21:19:32.616388] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.911 [2024-04-18 21:19:32.616484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.911 [2024-04-18 21:19:32.616502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.911 [2024-04-18 21:19:32.616517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.911 [2024-04-18 21:19:32.616524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.911 [2024-04-18 21:19:32.616541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.911 qpair failed and we were unable to recover it. 
00:26:16.911 [2024-04-18 21:19:32.626395] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.911 [2024-04-18 21:19:32.626492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.911 [2024-04-18 21:19:32.626517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.911 [2024-04-18 21:19:32.626525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.911 [2024-04-18 21:19:32.626537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.911 [2024-04-18 21:19:32.626554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.911 qpair failed and we were unable to recover it. 00:26:16.911 [2024-04-18 21:19:32.636480] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.911 [2024-04-18 21:19:32.636610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.911 [2024-04-18 21:19:32.636629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.911 [2024-04-18 21:19:32.636636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.911 [2024-04-18 21:19:32.636642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.911 [2024-04-18 21:19:32.636658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.911 qpair failed and we were unable to recover it. 00:26:16.911 [2024-04-18 21:19:32.646475] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.911 [2024-04-18 21:19:32.646577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.911 [2024-04-18 21:19:32.646596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.911 [2024-04-18 21:19:32.646604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.911 [2024-04-18 21:19:32.646610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.911 [2024-04-18 21:19:32.646625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.911 qpair failed and we were unable to recover it. 
00:26:16.911 [2024-04-18 21:19:32.656496] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.911 [2024-04-18 21:19:32.656597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.911 [2024-04-18 21:19:32.656616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.911 [2024-04-18 21:19:32.656623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.911 [2024-04-18 21:19:32.656629] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.911 [2024-04-18 21:19:32.656646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.911 qpair failed and we were unable to recover it. 00:26:16.911 [2024-04-18 21:19:32.666493] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.666591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.666609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.666617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.666622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.666639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.676554] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.676652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.676671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.676678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.676684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.676700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 
00:26:16.912 [2024-04-18 21:19:32.686599] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.686732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.686751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.686758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.686764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.686780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.696600] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.696694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.696712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.696719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.696726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.696742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.706606] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.706710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.706728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.706736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.706742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.706758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 
00:26:16.912 [2024-04-18 21:19:32.716658] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.716755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.716773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.716781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.716790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.716806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.726693] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.726797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.726815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.726822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.726829] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.726844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.736706] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.736804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.736821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.736828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.736834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.736850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 
00:26:16.912 [2024-04-18 21:19:32.746748] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.746844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.746863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.746870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.746877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.746892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.756744] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.756836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.756854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.756862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.756868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.756884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.766906] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.767012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.767031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.767038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.767044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.767060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 
00:26:16.912 [2024-04-18 21:19:32.776824] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.776923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.776942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.776949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.776955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.776971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.786848] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.786944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.912 [2024-04-18 21:19:32.786962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.912 [2024-04-18 21:19:32.786970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.912 [2024-04-18 21:19:32.786975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.912 [2024-04-18 21:19:32.786991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.912 qpair failed and we were unable to recover it. 00:26:16.912 [2024-04-18 21:19:32.796883] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.912 [2024-04-18 21:19:32.796983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.913 [2024-04-18 21:19:32.797002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.913 [2024-04-18 21:19:32.797009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.913 [2024-04-18 21:19:32.797015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.913 [2024-04-18 21:19:32.797031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.913 qpair failed and we were unable to recover it. 
00:26:16.913 [2024-04-18 21:19:32.806914] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.913 [2024-04-18 21:19:32.807011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.913 [2024-04-18 21:19:32.807029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.913 [2024-04-18 21:19:32.807036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.913 [2024-04-18 21:19:32.807047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.913 [2024-04-18 21:19:32.807062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.913 qpair failed and we were unable to recover it. 00:26:16.913 [2024-04-18 21:19:32.816935] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.913 [2024-04-18 21:19:32.817036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.913 [2024-04-18 21:19:32.817055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.913 [2024-04-18 21:19:32.817062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.913 [2024-04-18 21:19:32.817068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.913 [2024-04-18 21:19:32.817084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.913 qpair failed and we were unable to recover it. 00:26:16.913 [2024-04-18 21:19:32.826958] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.913 [2024-04-18 21:19:32.827061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.913 [2024-04-18 21:19:32.827079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.913 [2024-04-18 21:19:32.827087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.913 [2024-04-18 21:19:32.827092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.913 [2024-04-18 21:19:32.827108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.913 qpair failed and we were unable to recover it. 
00:26:16.913 [2024-04-18 21:19:32.836989] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.913 [2024-04-18 21:19:32.837081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.913 [2024-04-18 21:19:32.837099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.913 [2024-04-18 21:19:32.837107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.913 [2024-04-18 21:19:32.837113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:16.913 [2024-04-18 21:19:32.837129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.913 qpair failed and we were unable to recover it. 00:26:17.173 [2024-04-18 21:19:32.847031] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.173 [2024-04-18 21:19:32.847130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.173 [2024-04-18 21:19:32.847148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.173 [2024-04-18 21:19:32.847155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.173 [2024-04-18 21:19:32.847161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.173 [2024-04-18 21:19:32.847177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.173 qpair failed and we were unable to recover it. 00:26:17.173 [2024-04-18 21:19:32.857025] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.173 [2024-04-18 21:19:32.857125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.173 [2024-04-18 21:19:32.857143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.173 [2024-04-18 21:19:32.857150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.173 [2024-04-18 21:19:32.857156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.173 [2024-04-18 21:19:32.857172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.173 qpair failed and we were unable to recover it. 
00:26:17.173 [2024-04-18 21:19:32.867081] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.173 [2024-04-18 21:19:32.867180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.173 [2024-04-18 21:19:32.867198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.173 [2024-04-18 21:19:32.867207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.173 [2024-04-18 21:19:32.867214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.173 [2024-04-18 21:19:32.867229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.173 qpair failed and we were unable to recover it. 00:26:17.173 [2024-04-18 21:19:32.877101] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.173 [2024-04-18 21:19:32.877194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.173 [2024-04-18 21:19:32.877214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.173 [2024-04-18 21:19:32.877222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.173 [2024-04-18 21:19:32.877227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.173 [2024-04-18 21:19:32.877243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.173 qpair failed and we were unable to recover it. 00:26:17.173 [2024-04-18 21:19:32.887145] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.173 [2024-04-18 21:19:32.887241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.173 [2024-04-18 21:19:32.887259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.173 [2024-04-18 21:19:32.887267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.887273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.887289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 
00:26:17.174 [2024-04-18 21:19:32.897153] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.897249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.897268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.897279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.897285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.897300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:32.907177] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.907274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.907292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.907299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.907305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.907321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:32.917195] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.917293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.917311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.917319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.917325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.917340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 
00:26:17.174 [2024-04-18 21:19:32.927243] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.927338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.927356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.927363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.927369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.927384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:32.937273] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.937375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.937393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.937400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.937406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.937422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:32.947223] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.947318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.947337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.947345] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.947351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.947367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 
00:26:17.174 [2024-04-18 21:19:32.957323] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.957420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.957438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.957445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.957451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.957467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:32.967280] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.967384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.967402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.967409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.967415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.967431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:32.977457] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.977562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.977580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.977587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.977593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.977610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 
00:26:17.174 [2024-04-18 21:19:32.987422] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.987542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.987560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.987571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.987577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.987592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:32.997450] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:32.997549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:32.997567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:32.997575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:32.997581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:32.997597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:33.007480] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:33.007584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:33.007603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:33.007610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:33.007616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:33.007632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 
00:26:17.174 [2024-04-18 21:19:33.017499] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:33.017602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:33.017620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:33.017627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:33.017633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:33.017649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:33.027527] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:33.027624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:33.027642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:33.027649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:33.027655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:33.027671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:33.037564] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:33.037662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:33.037682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:33.037689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:33.037695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:33.037712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 
00:26:17.174 [2024-04-18 21:19:33.047611] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:33.047713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:33.047732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:33.047740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:33.047746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:33.047763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:33.057571] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:33.057670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:33.057688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:33.057696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:33.057702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:33.057718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 00:26:17.174 [2024-04-18 21:19:33.067766] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:33.067864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:33.067883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.174 [2024-04-18 21:19:33.067891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.174 [2024-04-18 21:19:33.067897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.174 [2024-04-18 21:19:33.067913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.174 qpair failed and we were unable to recover it. 
00:26:17.174 [2024-04-18 21:19:33.077657] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.174 [2024-04-18 21:19:33.077752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.174 [2024-04-18 21:19:33.077770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.175 [2024-04-18 21:19:33.077781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.175 [2024-04-18 21:19:33.077787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.175 [2024-04-18 21:19:33.077803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.175 qpair failed and we were unable to recover it. 00:26:17.175 [2024-04-18 21:19:33.087714] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.175 [2024-04-18 21:19:33.087809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.175 [2024-04-18 21:19:33.087827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.175 [2024-04-18 21:19:33.087835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.175 [2024-04-18 21:19:33.087841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.175 [2024-04-18 21:19:33.087856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.175 qpair failed and we were unable to recover it. 00:26:17.175 [2024-04-18 21:19:33.097730] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.175 [2024-04-18 21:19:33.097830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.175 [2024-04-18 21:19:33.097849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.175 [2024-04-18 21:19:33.097857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.175 [2024-04-18 21:19:33.097862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.175 [2024-04-18 21:19:33.097878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.175 qpair failed and we were unable to recover it. 
00:26:17.435 [2024-04-18 21:19:33.107683] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.107803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.107822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.107829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.107835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.107851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 00:26:17.435 [2024-04-18 21:19:33.117790] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.117888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.117907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.117915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.117922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.117938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 00:26:17.435 [2024-04-18 21:19:33.127824] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.127922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.127940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.127948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.127954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.127970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 
00:26:17.435 [2024-04-18 21:19:33.137853] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.137950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.137969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.137976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.137982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.137998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 00:26:17.435 [2024-04-18 21:19:33.147886] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.147982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.148000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.148008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.148014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.148029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 00:26:17.435 [2024-04-18 21:19:33.157911] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.158012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.158030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.158037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.158044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.158059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 
00:26:17.435 [2024-04-18 21:19:33.167934] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.168033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.168054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.168061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.168067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.168082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 00:26:17.435 [2024-04-18 21:19:33.177982] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.178177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.178194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.178201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.178207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.178224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 00:26:17.435 [2024-04-18 21:19:33.187985] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.435 [2024-04-18 21:19:33.188085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.435 [2024-04-18 21:19:33.188103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.435 [2024-04-18 21:19:33.188110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.435 [2024-04-18 21:19:33.188116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.435 [2024-04-18 21:19:33.188132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.435 qpair failed and we were unable to recover it. 
00:26:17.436 [2024-04-18 21:19:33.197985] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.198082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.198100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.198107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.198113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.198130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.208037] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.208134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.208152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.208159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.208165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.208184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.218001] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.218102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.218121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.218128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.218134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.218150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 
00:26:17.436 [2024-04-18 21:19:33.228106] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.228205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.228223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.228232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.228239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.228254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.238067] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.238165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.238185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.238193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.238199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.238215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.248188] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.248291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.248310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.248318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.248324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.248340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 
00:26:17.436 [2024-04-18 21:19:33.258160] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.258252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.258274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.258281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.258287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.258303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.268133] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.268230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.268250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.268257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.268263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.268279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.278275] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.278373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.278391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.278398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.278404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.278420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 
00:26:17.436 [2024-04-18 21:19:33.288298] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.288406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.288424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.288432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.288438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.288453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.298303] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.298408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.298427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.298434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.298439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.298459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.308333] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.308432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.308450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.308458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.308464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.308480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 
00:26:17.436 [2024-04-18 21:19:33.318295] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.318423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.318441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.436 [2024-04-18 21:19:33.318449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.436 [2024-04-18 21:19:33.318455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.436 [2024-04-18 21:19:33.318471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.436 qpair failed and we were unable to recover it. 00:26:17.436 [2024-04-18 21:19:33.328349] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.436 [2024-04-18 21:19:33.328476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.436 [2024-04-18 21:19:33.328495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.437 [2024-04-18 21:19:33.328503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.437 [2024-04-18 21:19:33.328509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.437 [2024-04-18 21:19:33.328533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.437 qpair failed and we were unable to recover it. 00:26:17.437 [2024-04-18 21:19:33.338414] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.437 [2024-04-18 21:19:33.338530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.437 [2024-04-18 21:19:33.338548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.437 [2024-04-18 21:19:33.338555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.437 [2024-04-18 21:19:33.338562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.437 [2024-04-18 21:19:33.338578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.437 qpair failed and we were unable to recover it. 
00:26:17.437 [2024-04-18 21:19:33.348423] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.437 [2024-04-18 21:19:33.348526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.437 [2024-04-18 21:19:33.348548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.437 [2024-04-18 21:19:33.348556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.437 [2024-04-18 21:19:33.348561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.437 [2024-04-18 21:19:33.348577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.437 qpair failed and we were unable to recover it. 00:26:17.437 [2024-04-18 21:19:33.358462] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.437 [2024-04-18 21:19:33.358567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.437 [2024-04-18 21:19:33.358585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.437 [2024-04-18 21:19:33.358592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.437 [2024-04-18 21:19:33.358598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.437 [2024-04-18 21:19:33.358615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.437 qpair failed and we were unable to recover it. 00:26:17.697 [2024-04-18 21:19:33.368450] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.697 [2024-04-18 21:19:33.368594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.697 [2024-04-18 21:19:33.368612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.697 [2024-04-18 21:19:33.368620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.697 [2024-04-18 21:19:33.368626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.697 [2024-04-18 21:19:33.368642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.697 qpair failed and we were unable to recover it. 
00:26:17.697 [2024-04-18 21:19:33.378497] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.697 [2024-04-18 21:19:33.378602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.697 [2024-04-18 21:19:33.378622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.697 [2024-04-18 21:19:33.378629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.697 [2024-04-18 21:19:33.378635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.378651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.388602] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.388704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.388723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.388730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.388736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.388755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.398517] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.398613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.398632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.398639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.398645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.398661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 
00:26:17.698 [2024-04-18 21:19:33.408553] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.408655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.408673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.408681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.408687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.408702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.418647] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.418746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.418763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.418770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.418776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.418793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.428660] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.428760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.428779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.428787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.428793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.428809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 
00:26:17.698 [2024-04-18 21:19:33.438679] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.438776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.438797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.438805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.438811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.438826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.448697] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.448791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.448809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.448816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.448822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.448838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.458685] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.458786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.458804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.458811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.458817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.458833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 
00:26:17.698 [2024-04-18 21:19:33.468762] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.468873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.468892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.468899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.468905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.468920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.478838] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.478947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.478966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.478973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.478984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.479001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.488777] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.488873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.488892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.488899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.488905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.488921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 
00:26:17.698 [2024-04-18 21:19:33.498827] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.498927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.498945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.498952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.498958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.498974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.508840] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.508940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.698 [2024-04-18 21:19:33.508959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.698 [2024-04-18 21:19:33.508967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.698 [2024-04-18 21:19:33.508973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.698 [2024-04-18 21:19:33.508988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.698 qpair failed and we were unable to recover it. 00:26:17.698 [2024-04-18 21:19:33.518852] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.698 [2024-04-18 21:19:33.518950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.518968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.518975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.518981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.518998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 
00:26:17.699 [2024-04-18 21:19:33.528876] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.528979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.528998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.529005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.529011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.529027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 00:26:17.699 [2024-04-18 21:19:33.538914] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.539015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.539034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.539041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.539047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.539062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 00:26:17.699 [2024-04-18 21:19:33.548986] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.549085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.549102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.549110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.549115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.549132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 
00:26:17.699 [2024-04-18 21:19:33.558965] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.559071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.559091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.559098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.559104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.559120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 00:26:17.699 [2024-04-18 21:19:33.569063] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.569164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.569183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.569190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.569199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.569216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 00:26:17.699 [2024-04-18 21:19:33.579095] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.579188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.579207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.579215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.579220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.579237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 
00:26:17.699 [2024-04-18 21:19:33.589052] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.589155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.589174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.589181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.589187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.589203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 00:26:17.699 [2024-04-18 21:19:33.599082] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.599180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.599198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.599206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.599212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.599228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 00:26:17.699 [2024-04-18 21:19:33.609172] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.609267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.609286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.609294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.609300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.609316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 
00:26:17.699 [2024-04-18 21:19:33.619213] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.699 [2024-04-18 21:19:33.619350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.699 [2024-04-18 21:19:33.619369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.699 [2024-04-18 21:19:33.619377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.699 [2024-04-18 21:19:33.619383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.699 [2024-04-18 21:19:33.619399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.699 qpair failed and we were unable to recover it. 00:26:17.960 [2024-04-18 21:19:33.629248] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.960 [2024-04-18 21:19:33.629342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.960 [2024-04-18 21:19:33.629362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.960 [2024-04-18 21:19:33.629369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.960 [2024-04-18 21:19:33.629376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.960 [2024-04-18 21:19:33.629392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.960 qpair failed and we were unable to recover it. 00:26:17.960 [2024-04-18 21:19:33.639280] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.960 [2024-04-18 21:19:33.639385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.960 [2024-04-18 21:19:33.639404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.960 [2024-04-18 21:19:33.639412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.960 [2024-04-18 21:19:33.639418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.960 [2024-04-18 21:19:33.639434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.960 qpair failed and we were unable to recover it. 
00:26:17.960 [2024-04-18 21:19:33.649324] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.960 [2024-04-18 21:19:33.649421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.960 [2024-04-18 21:19:33.649439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.960 [2024-04-18 21:19:33.649446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.960 [2024-04-18 21:19:33.649452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.960 [2024-04-18 21:19:33.649468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.960 qpair failed and we were unable to recover it. 00:26:17.960 [2024-04-18 21:19:33.659342] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.960 [2024-04-18 21:19:33.659436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.960 [2024-04-18 21:19:33.659455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.960 [2024-04-18 21:19:33.659465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.659472] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.659488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.669289] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.669387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.669406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.669413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.669419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.669435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 
00:26:17.961 [2024-04-18 21:19:33.679386] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.679484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.679502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.679509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.679521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.679537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.689425] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.689530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.689549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.689556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.689562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.689579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.699445] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.699543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.699561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.699569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.699575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.699591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 
00:26:17.961 [2024-04-18 21:19:33.709472] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.709575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.709594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.709601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.709607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.709623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.719485] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.719577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.719596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.719603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.719609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.719625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.729532] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.729632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.729651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.729658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.729664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.729679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 
00:26:17.961 [2024-04-18 21:19:33.739560] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.739657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.739675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.739682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.739688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.739704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.749541] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.749635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.749654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.749665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.749671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.749687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.759607] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.759702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.759721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.759728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.759734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.759751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 
00:26:17.961 [2024-04-18 21:19:33.769649] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.769746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.769765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.769772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.769778] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.769794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.779670] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.779768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.779787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.779794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.779800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.779816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 00:26:17.961 [2024-04-18 21:19:33.789702] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.961 [2024-04-18 21:19:33.789843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.961 [2024-04-18 21:19:33.789862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.961 [2024-04-18 21:19:33.789869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.961 [2024-04-18 21:19:33.789876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.961 [2024-04-18 21:19:33.789891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.961 qpair failed and we were unable to recover it. 
00:26:17.961 [2024-04-18 21:19:33.799735] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.799843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.799862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.799870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.799875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.799891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 00:26:17.962 [2024-04-18 21:19:33.809762] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.809859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.809879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.809886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.809892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.809908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 00:26:17.962 [2024-04-18 21:19:33.819781] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.819876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.819894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.819901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.819907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.819922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 
00:26:17.962 [2024-04-18 21:19:33.829805] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.829897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.829915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.829922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.829928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.829944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 00:26:17.962 [2024-04-18 21:19:33.839869] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.839980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.839999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.840009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.840015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.840031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 00:26:17.962 [2024-04-18 21:19:33.849865] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.849962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.849981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.849988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.849994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.850010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 
00:26:17.962 [2024-04-18 21:19:33.859891] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.859986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.860005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.860012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.860018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.860035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 00:26:17.962 [2024-04-18 21:19:33.869915] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.870009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.870029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.870036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.870042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.870057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 00:26:17.962 [2024-04-18 21:19:33.879958] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.962 [2024-04-18 21:19:33.880049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.962 [2024-04-18 21:19:33.880069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.962 [2024-04-18 21:19:33.880078] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.962 [2024-04-18 21:19:33.880087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:17.962 [2024-04-18 21:19:33.880109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.962 qpair failed and we were unable to recover it. 
00:26:18.223 [2024-04-18 21:19:33.890018] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.223 [2024-04-18 21:19:33.890118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.223 [2024-04-18 21:19:33.890137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.223 [2024-04-18 21:19:33.890144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.223 [2024-04-18 21:19:33.890150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.223 [2024-04-18 21:19:33.890165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-18 21:19:33.900005] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.223 [2024-04-18 21:19:33.900101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.223 [2024-04-18 21:19:33.900119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.223 [2024-04-18 21:19:33.900126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.223 [2024-04-18 21:19:33.900132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.223 [2024-04-18 21:19:33.900148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-18 21:19:33.910031] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.223 [2024-04-18 21:19:33.910126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.223 [2024-04-18 21:19:33.910144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.223 [2024-04-18 21:19:33.910152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.223 [2024-04-18 21:19:33.910158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.223 [2024-04-18 21:19:33.910174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.223 qpair failed and we were unable to recover it. 
00:26:18.223 [2024-04-18 21:19:33.920096] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.223 [2024-04-18 21:19:33.920203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.223 [2024-04-18 21:19:33.920221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.223 [2024-04-18 21:19:33.920228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.223 [2024-04-18 21:19:33.920234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.223 [2024-04-18 21:19:33.920250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:33.930099] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:33.930197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:33.930220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:33.930228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:33.930234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:33.930250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:33.940111] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:33.940216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:33.940235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:33.940243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:33.940248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:33.940264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 
00:26:18.224 [2024-04-18 21:19:33.950123] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:33.950220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:33.950239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:33.950247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:33.950253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:33.950269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:33.960192] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:33.960285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:33.960303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:33.960310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:33.960316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:33.960332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:33.970212] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:33.970308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:33.970327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:33.970334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:33.970340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:33.970360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 
00:26:18.224 [2024-04-18 21:19:33.980249] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:33.980345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:33.980364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:33.980371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:33.980377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:33.980393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:33.990267] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:33.990361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:33.990380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:33.990387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:33.990393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:33.990409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:34.000287] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:34.000378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:34.000396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:34.000403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:34.000409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:34.000425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 
00:26:18.224 [2024-04-18 21:19:34.010327] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:34.010423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:34.010442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:34.010449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:34.010455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:34.010470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:34.020355] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:34.020450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:34.020472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:34.020480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:34.020486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:34.020502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:34.030395] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:34.030488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:34.030506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:34.030519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:34.030525] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:34.030541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 
00:26:18.224 [2024-04-18 21:19:34.040499] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:34.040599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:34.040618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:34.040625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:34.040631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:34.040647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:34.050450] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:34.050551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:34.050569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.224 [2024-04-18 21:19:34.050576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.224 [2024-04-18 21:19:34.050582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.224 [2024-04-18 21:19:34.050598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-18 21:19:34.060449] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.224 [2024-04-18 21:19:34.060555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.224 [2024-04-18 21:19:34.060574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.060581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.060587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.060606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 
00:26:18.225 [2024-04-18 21:19:34.070502] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.070602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.070622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.070630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.070636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.070652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-18 21:19:34.080451] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.080565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.080584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.080591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.080597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.080614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-18 21:19:34.090569] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.090669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.090688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.090696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.090702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.090718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 
00:26:18.225 [2024-04-18 21:19:34.100504] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.100609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.100627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.100635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.100641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.100657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-18 21:19:34.110618] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.110715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.110738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.110745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.110751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.110767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-18 21:19:34.120646] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.120745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.120763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.120772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.120779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.120796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 
00:26:18.225 [2024-04-18 21:19:34.130696] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.130792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.130812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.130819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.130825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.130841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-18 21:19:34.140698] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.140794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.140813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.140821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.140827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.140842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-18 21:19:34.150738] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.225 [2024-04-18 21:19:34.150833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.225 [2024-04-18 21:19:34.150851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.225 [2024-04-18 21:19:34.150859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.225 [2024-04-18 21:19:34.150865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.225 [2024-04-18 21:19:34.150884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.225 qpair failed and we were unable to recover it. 
00:26:18.486 [2024-04-18 21:19:34.160762] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.160857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.160876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.160883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.160889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.160904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-18 21:19:34.170796] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.170891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.170909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.170917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.170923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.170938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-18 21:19:34.180851] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.180949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.180968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.180975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.180981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.180998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 
00:26:18.486 [2024-04-18 21:19:34.190860] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.190962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.190980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.190988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.190994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.191010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-18 21:19:34.200868] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.200962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.200984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.200991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.200997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.201012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-18 21:19:34.210953] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.211048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.211067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.211074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.211081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.211097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 
00:26:18.486 [2024-04-18 21:19:34.220972] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.221068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.221086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.221093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.221099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.221116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-18 21:19:34.230931] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.231071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.231090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.231097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.231103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.231119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-18 21:19:34.240984] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.241080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.241098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.241106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.241117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.241134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 
00:26:18.486 [2024-04-18 21:19:34.250944] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.251039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.251057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.486 [2024-04-18 21:19:34.251065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.486 [2024-04-18 21:19:34.251071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.486 [2024-04-18 21:19:34.251086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-18 21:19:34.261044] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.486 [2024-04-18 21:19:34.261138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.486 [2024-04-18 21:19:34.261157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.261165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.261171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.261187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.271079] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.271179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.271198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.271205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.271211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.271227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-18 21:19:34.281098] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.281196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.281214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.281221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.281227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.281243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.291130] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.291233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.291252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.291259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.291265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.291281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.301157] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.301249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.301268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.301276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.301281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.301297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-18 21:19:34.311155] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.311250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.311269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.311277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.311283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.311298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.321188] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.321278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.321296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.321304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.321310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.321326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.331246] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.331344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.331363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.331370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.331379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.331395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-18 21:19:34.341221] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.341320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.341338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.341346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.341352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.341367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.351295] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.351390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.351408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.351416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.351422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.351437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.361349] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.361449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.361467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.361474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.361480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.361496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-18 21:19:34.371402] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.371496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.371520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.371528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.371534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.371550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.381385] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.381485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.381504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.381517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.381523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.381539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-18 21:19:34.391331] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.487 [2024-04-18 21:19:34.391436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.487 [2024-04-18 21:19:34.391455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.487 [2024-04-18 21:19:34.391462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.487 [2024-04-18 21:19:34.391468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.487 [2024-04-18 21:19:34.391484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.487 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-18 21:19:34.401450] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.488 [2024-04-18 21:19:34.401552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.488 [2024-04-18 21:19:34.401571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.488 [2024-04-18 21:19:34.401578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.488 [2024-04-18 21:19:34.401584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.488 [2024-04-18 21:19:34.401600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-18 21:19:34.411469] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.488 [2024-04-18 21:19:34.411571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.488 [2024-04-18 21:19:34.411589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.488 [2024-04-18 21:19:34.411597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.488 [2024-04-18 21:19:34.411603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.488 [2024-04-18 21:19:34.411619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.747 [2024-04-18 21:19:34.421492] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.421597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.421614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.421625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.421630] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.421647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 
00:26:18.748 [2024-04-18 21:19:34.431535] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.431633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.431652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.431660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.431665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.431682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 00:26:18.748 [2024-04-18 21:19:34.441545] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.441638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.441656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.441664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.441670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.441685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 00:26:18.748 [2024-04-18 21:19:34.451588] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.451685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.451704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.451711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.451717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.451733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 
00:26:18.748 [2024-04-18 21:19:34.461621] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.461719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.461738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.461745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.461751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.461767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 00:26:18.748 [2024-04-18 21:19:34.471644] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.471744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.471762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.471770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.471775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.471793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 00:26:18.748 [2024-04-18 21:19:34.481659] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.481752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.481770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.481778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.481784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.481800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 
00:26:18.748 [2024-04-18 21:19:34.491702] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.491799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.491817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.491825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.491831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.491846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 00:26:18.748 [2024-04-18 21:19:34.501660] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.501755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.501773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.501780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.501786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.501802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 00:26:18.748 [2024-04-18 21:19:34.511730] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.511826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.511845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.511855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.511861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.511877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 
00:26:18.748 [2024-04-18 21:19:34.521785] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.521880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.521898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.521905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.521911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.521927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 00:26:18.748 [2024-04-18 21:19:34.531744] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.531838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.531856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.531864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.531870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.531885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 00:26:18.748 [2024-04-18 21:19:34.541836] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.541931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.541949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.541956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.748 [2024-04-18 21:19:34.541962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.748 [2024-04-18 21:19:34.541978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.748 qpair failed and we were unable to recover it. 
00:26:18.748 [2024-04-18 21:19:34.551899] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.748 [2024-04-18 21:19:34.552005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.748 [2024-04-18 21:19:34.552023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.748 [2024-04-18 21:19:34.552031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.749 [2024-04-18 21:19:34.552037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.749 [2024-04-18 21:19:34.552053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.749 qpair failed and we were unable to recover it. 00:26:18.749 [2024-04-18 21:19:34.561905] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.749 [2024-04-18 21:19:34.561999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.749 [2024-04-18 21:19:34.562017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.749 [2024-04-18 21:19:34.562024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.749 [2024-04-18 21:19:34.562031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.749 [2024-04-18 21:19:34.562047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.749 qpair failed and we were unable to recover it. 00:26:18.749 [2024-04-18 21:19:34.571930] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.749 [2024-04-18 21:19:34.572029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.749 [2024-04-18 21:19:34.572048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.749 [2024-04-18 21:19:34.572055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.749 [2024-04-18 21:19:34.572061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.749 [2024-04-18 21:19:34.572077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.749 qpair failed and we were unable to recover it. 
00:26:18.749 [2024-04-18 21:19:34.581945] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.749 [2024-04-18 21:19:34.582048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.749 [2024-04-18 21:19:34.582067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.749 [2024-04-18 21:19:34.582074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.749 [2024-04-18 21:19:34.582080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.749 [2024-04-18 21:19:34.582096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.749 qpair failed and we were unable to recover it. 00:26:18.749 [2024-04-18 21:19:34.591991] ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.749 [2024-04-18 21:19:34.592094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.749 [2024-04-18 21:19:34.592112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.749 [2024-04-18 21:19:34.592120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.749 [2024-04-18 21:19:34.592126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2461c90 00:26:18.749 [2024-04-18 21:19:34.592142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.749 qpair failed and we were unable to recover it. 00:26:18.749 [2024-04-18 21:19:34.592229] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:18.749 A controller has encountered a failure and is being reset. 00:26:18.749 [2024-04-18 21:19:34.592320] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246f7f0 (9): Bad file descriptor 00:26:18.749 Controller properly reset. 00:26:18.749 Initializing NVMe Controllers 00:26:18.749 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:18.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:18.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:18.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:18.749 Initialization complete. Launching workers. 
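After the Keep Alive submission also fails, the host gives up on the broken admin queue, resets the controller, and reattaches to the listener at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1, as the "Controller properly reset" and "Attached to NVMe over Fabrics controller" lines show. Outside this harness, a comparable manual attach and detach against the same listener could be done with nvme-cli; the sketch below is an illustration only (the listener address, port and subsystem NQN are taken from the log above, nothing here is part of the test script):

  sudo modprobe nvme-tcp                     # kernel host-side NVMe/TCP transport
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme list                             # attached namespaces appear as /dev/nvmeXnY
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1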
00:26:18.749 Starting thread on core 1 00:26:18.749 Starting thread on core 2 00:26:18.749 Starting thread on core 3 00:26:18.749 Starting thread on core 0 00:26:18.749 21:19:34 -- host/target_disconnect.sh@59 -- # sync 00:26:18.749 00:26:18.749 real 0m11.270s 00:26:18.749 user 0m20.891s 00:26:18.749 sys 0m4.257s 00:26:18.749 21:19:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:18.749 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:26:18.749 ************************************ 00:26:18.749 END TEST nvmf_target_disconnect_tc2 00:26:18.749 ************************************ 00:26:19.008 21:19:34 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:26:19.008 21:19:34 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:19.008 21:19:34 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:26:19.008 21:19:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:19.008 21:19:34 -- nvmf/common.sh@117 -- # sync 00:26:19.008 21:19:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.008 21:19:34 -- nvmf/common.sh@120 -- # set +e 00:26:19.008 21:19:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.008 21:19:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.008 rmmod nvme_tcp 00:26:19.008 rmmod nvme_fabrics 00:26:19.008 rmmod nvme_keyring 00:26:19.008 21:19:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.008 21:19:34 -- nvmf/common.sh@124 -- # set -e 00:26:19.008 21:19:34 -- nvmf/common.sh@125 -- # return 0 00:26:19.008 21:19:34 -- nvmf/common.sh@478 -- # '[' -n 3204739 ']' 00:26:19.008 21:19:34 -- nvmf/common.sh@479 -- # killprocess 3204739 00:26:19.008 21:19:34 -- common/autotest_common.sh@936 -- # '[' -z 3204739 ']' 00:26:19.008 21:19:34 -- common/autotest_common.sh@940 -- # kill -0 3204739 00:26:19.008 21:19:34 -- common/autotest_common.sh@941 -- # uname 00:26:19.008 21:19:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.008 21:19:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3204739 00:26:19.008 21:19:34 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:26:19.008 21:19:34 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:26:19.008 21:19:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3204739' 00:26:19.008 killing process with pid 3204739 00:26:19.008 21:19:34 -- common/autotest_common.sh@955 -- # kill 3204739 00:26:19.008 21:19:34 -- common/autotest_common.sh@960 -- # wait 3204739 00:26:19.267 21:19:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:19.267 21:19:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:19.268 21:19:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:19.268 21:19:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.268 21:19:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.268 21:19:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.268 21:19:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.268 21:19:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.171 21:19:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.171 00:26:21.171 real 0m20.257s 00:26:21.171 user 0m48.098s 00:26:21.171 sys 0m9.303s 00:26:21.171 21:19:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:21.171 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:26:21.171 ************************************ 00:26:21.171 END TEST nvmf_target_disconnect 00:26:21.172 
************************************ 00:26:21.431 21:19:37 -- nvmf/nvmf.sh@124 -- # timing_exit host 00:26:21.431 21:19:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:21.431 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:26:21.431 21:19:37 -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:26:21.431 00:26:21.431 real 20m6.020s 00:26:21.431 user 41m41.665s 00:26:21.431 sys 6m27.238s 00:26:21.431 21:19:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:21.431 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:26:21.431 ************************************ 00:26:21.431 END TEST nvmf_tcp 00:26:21.431 ************************************ 00:26:21.431 21:19:37 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:26:21.431 21:19:37 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:21.431 21:19:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:21.431 21:19:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:21.431 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:26:21.431 ************************************ 00:26:21.431 START TEST spdkcli_nvmf_tcp 00:26:21.431 ************************************ 00:26:21.431 21:19:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:21.691 * Looking for test storage... 00:26:21.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:21.691 21:19:37 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:21.691 21:19:37 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:21.691 21:19:37 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:21.691 21:19:37 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.691 21:19:37 -- nvmf/common.sh@7 -- # uname -s 00:26:21.691 21:19:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.691 21:19:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.691 21:19:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.691 21:19:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.691 21:19:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.691 21:19:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.691 21:19:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.691 21:19:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.691 21:19:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.691 21:19:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.691 21:19:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:21.691 21:19:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:21.691 21:19:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.691 21:19:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.691 21:19:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.691 21:19:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.691 21:19:37 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.691 21:19:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.691 21:19:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.691 21:19:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.691 21:19:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.691 21:19:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.691 21:19:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.691 21:19:37 -- paths/export.sh@5 -- # export PATH 00:26:21.691 21:19:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.691 21:19:37 -- nvmf/common.sh@47 -- # : 0 00:26:21.691 21:19:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.691 21:19:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.691 21:19:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.691 21:19:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.691 21:19:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.691 21:19:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.691 21:19:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.691 21:19:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.691 21:19:37 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:21.691 21:19:37 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:21.691 21:19:37 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:21.691 21:19:37 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:21.691 21:19:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:21.691 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:26:21.691 21:19:37 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:21.691 21:19:37 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3206268 00:26:21.691 21:19:37 -- spdkcli/common.sh@34 -- # waitforlisten 3206268 00:26:21.691 21:19:37 -- common/autotest_common.sh@817 -- # '[' -z 3206268 ']' 00:26:21.691 21:19:37 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.691 21:19:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:21.691 21:19:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.691 21:19:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:21.691 21:19:37 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:21.691 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:26:21.691 [2024-04-18 21:19:37.479241] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:21.691 [2024-04-18 21:19:37.479291] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206268 ] 00:26:21.691 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.691 [2024-04-18 21:19:37.539748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:21.691 [2024-04-18 21:19:37.618457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.691 [2024-04-18 21:19:37.618461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.626 21:19:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:22.626 21:19:38 -- common/autotest_common.sh@850 -- # return 0 00:26:22.626 21:19:38 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:22.626 21:19:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:22.626 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:26:22.626 21:19:38 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:22.626 21:19:38 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:22.626 21:19:38 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:22.626 21:19:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:22.626 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:26:22.626 21:19:38 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:22.626 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:22.626 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:22.626 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:22.626 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:22.626 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:22.626 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:22.626 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:22.626 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:22.626 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:22.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:22.626 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:22.626 ' 00:26:22.885 [2024-04-18 21:19:38.651682] nvmf_rpc.c: 279:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:24.790 [2024-04-18 21:19:40.691475] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.166 [2024-04-18 21:19:41.867545] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:28.103 [2024-04-18 21:19:44.030094] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:30.008 [2024-04-18 21:19:45.887888] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:31.386 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:31.386 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:31.386 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:31.386 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:31.386 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:31.386 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:31.386 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:31.386 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:31.386 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:31.386 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:31.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:31.386 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:31.645 21:19:47 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:31.645 21:19:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:31.645 21:19:47 -- common/autotest_common.sh@10 -- # set +x 00:26:31.645 21:19:47 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:31.645 21:19:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:31.645 21:19:47 -- common/autotest_common.sh@10 -- # set +x 00:26:31.645 21:19:47 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:31.645 21:19:47 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:31.904 21:19:47 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:32.163 21:19:47 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:32.163 21:19:47 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:32.163 21:19:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:32.163 21:19:47 -- common/autotest_common.sh@10 -- # set +x 00:26:32.163 21:19:47 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:32.163 21:19:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:32.163 21:19:47 -- common/autotest_common.sh@10 -- # set +x 00:26:32.163 21:19:47 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:32.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:32.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:32.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:32.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:32.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:32.163 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:32.163 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:32.163 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:32.163 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:32.163 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:32.163 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:32.163 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:32.163 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:32.163 ' 00:26:37.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:37.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:37.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:37.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:37.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:37.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:37.437 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:37.437 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:37.437 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:37.437 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:26:37.437 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:37.437 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:37.437 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:37.437 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:37.437 21:19:52 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:37.437 21:19:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:37.437 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:26:37.437 21:19:52 -- spdkcli/nvmf.sh@90 -- # killprocess 3206268 00:26:37.437 21:19:52 -- common/autotest_common.sh@936 -- # '[' -z 3206268 ']' 00:26:37.437 21:19:52 -- common/autotest_common.sh@940 -- # kill -0 3206268 00:26:37.437 21:19:52 -- common/autotest_common.sh@941 -- # uname 00:26:37.437 21:19:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:37.437 21:19:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206268 00:26:37.437 21:19:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:37.437 21:19:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:37.437 21:19:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206268' 00:26:37.437 killing process with pid 3206268 00:26:37.437 21:19:52 -- common/autotest_common.sh@955 -- # kill 3206268 00:26:37.437 [2024-04-18 21:19:52.913887] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:37.437 21:19:52 -- common/autotest_common.sh@960 -- # wait 3206268 00:26:37.437 21:19:53 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:37.437 21:19:53 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:37.437 21:19:53 -- spdkcli/common.sh@13 -- # '[' -n 3206268 ']' 00:26:37.437 21:19:53 -- spdkcli/common.sh@14 -- # killprocess 3206268 00:26:37.437 21:19:53 -- common/autotest_common.sh@936 -- # '[' -z 3206268 ']' 00:26:37.437 21:19:53 -- common/autotest_common.sh@940 -- # kill -0 3206268 00:26:37.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3206268) - No such process 00:26:37.437 21:19:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3206268 is not found' 00:26:37.437 Process with pid 3206268 is not found 00:26:37.437 21:19:53 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:37.437 21:19:53 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:37.437 21:19:53 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:37.437 00:26:37.437 real 0m15.800s 00:26:37.437 user 0m32.662s 00:26:37.437 sys 0m0.721s 00:26:37.437 21:19:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:37.437 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:26:37.437 ************************************ 00:26:37.437 END TEST spdkcli_nvmf_tcp 00:26:37.437 ************************************ 00:26:37.437 21:19:53 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:37.437 21:19:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:37.437 21:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:37.437 21:19:53 -- 
common/autotest_common.sh@10 -- # set +x 00:26:37.437 ************************************ 00:26:37.437 START TEST nvmf_identify_passthru 00:26:37.437 ************************************ 00:26:37.437 21:19:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:37.437 * Looking for test storage... 00:26:37.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:37.697 21:19:53 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.697 21:19:53 -- nvmf/common.sh@7 -- # uname -s 00:26:37.697 21:19:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.697 21:19:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.697 21:19:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.697 21:19:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.697 21:19:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.697 21:19:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.697 21:19:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.697 21:19:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.697 21:19:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.697 21:19:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.697 21:19:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.697 21:19:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.697 21:19:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.697 21:19:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.697 21:19:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.697 21:19:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.697 21:19:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.697 21:19:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.697 21:19:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.697 21:19:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.697 21:19:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.697 21:19:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.697 21:19:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.697 21:19:53 -- paths/export.sh@5 -- # export PATH 00:26:37.697 21:19:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.697 21:19:53 -- nvmf/common.sh@47 -- # : 0 00:26:37.697 21:19:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.697 21:19:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.697 21:19:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.697 21:19:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.697 21:19:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.697 21:19:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.697 21:19:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.697 21:19:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.697 21:19:53 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.697 21:19:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.697 21:19:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.697 21:19:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.697 21:19:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.697 21:19:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.697 21:19:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.697 21:19:53 -- paths/export.sh@5 -- # export PATH 00:26:37.697 21:19:53 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.697 21:19:53 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:37.697 21:19:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:37.697 21:19:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.697 21:19:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:37.697 21:19:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:37.697 21:19:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:37.697 21:19:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.698 21:19:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:37.698 21:19:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.698 21:19:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:37.698 21:19:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:37.698 21:19:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.698 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:26:44.279 21:19:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:44.279 21:19:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:44.279 21:19:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:44.279 21:19:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:44.279 21:19:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:44.279 21:19:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:44.279 21:19:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:44.279 21:19:59 -- nvmf/common.sh@295 -- # net_devs=() 00:26:44.279 21:19:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:44.279 21:19:59 -- nvmf/common.sh@296 -- # e810=() 00:26:44.279 21:19:59 -- nvmf/common.sh@296 -- # local -ga e810 00:26:44.279 21:19:59 -- nvmf/common.sh@297 -- # x722=() 00:26:44.279 21:19:59 -- nvmf/common.sh@297 -- # local -ga x722 00:26:44.279 21:19:59 -- nvmf/common.sh@298 -- # mlx=() 00:26:44.279 21:19:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:44.279 21:19:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.279 21:19:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:44.279 21:19:59 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:44.279 21:19:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:44.279 21:19:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.279 21:19:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:44.279 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:44.279 21:19:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.279 21:19:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:44.279 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:44.279 21:19:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:44.279 21:19:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.279 21:19:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.279 21:19:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:44.279 21:19:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.279 21:19:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:44.279 Found net devices under 0000:86:00.0: cvl_0_0 00:26:44.279 21:19:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.279 21:19:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.279 21:19:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.279 21:19:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:44.279 21:19:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.279 21:19:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:44.279 Found net devices under 0000:86:00.1: cvl_0_1 00:26:44.279 21:19:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.279 21:19:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:44.279 21:19:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:44.279 21:19:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:44.279 21:19:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.279 21:19:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.279 21:19:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.279 21:19:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:44.279 21:19:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.279 21:19:59 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.279 21:19:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:44.279 21:19:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.279 21:19:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.279 21:19:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:44.279 21:19:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:44.279 21:19:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.279 21:19:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.279 21:19:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.279 21:19:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.279 21:19:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:44.279 21:19:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.279 21:19:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.279 21:19:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.279 21:19:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:44.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:26:44.279 00:26:44.279 --- 10.0.0.2 ping statistics --- 00:26:44.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.279 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:26:44.279 21:19:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:26:44.279 00:26:44.279 --- 10.0.0.1 ping statistics --- 00:26:44.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.279 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:44.279 21:19:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.279 21:19:59 -- nvmf/common.sh@411 -- # return 0 00:26:44.279 21:19:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:44.279 21:19:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.279 21:19:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:44.279 21:19:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.279 21:19:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:44.279 21:19:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:44.279 21:19:59 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:44.279 21:19:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:44.279 21:19:59 -- common/autotest_common.sh@10 -- # set +x 00:26:44.279 21:19:59 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:44.279 21:19:59 -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:44.279 21:19:59 -- common/autotest_common.sh@1510 -- # local bdfs 00:26:44.279 21:19:59 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:26:44.279 21:19:59 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:26:44.279 21:19:59 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:44.279 21:19:59 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:44.279 21:19:59 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:26:44.280 21:19:59 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:44.280 21:19:59 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:44.280 21:19:59 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:26:44.280 21:19:59 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:26:44.280 21:19:59 -- common/autotest_common.sh@1513 -- # echo 0000:5e:00.0 00:26:44.280 21:19:59 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:26:44.280 21:19:59 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:26:44.280 21:19:59 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:44.280 21:19:59 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:26:44.280 21:19:59 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:44.280 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.474 21:20:03 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:26:48.474 21:20:03 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:26:48.474 21:20:03 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:48.474 21:20:03 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:48.474 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.665 21:20:07 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:52.665 21:20:07 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:52.665 21:20:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:52.666 21:20:07 -- common/autotest_common.sh@10 -- # set +x 00:26:52.666 21:20:07 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:52.666 21:20:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:52.666 21:20:07 -- common/autotest_common.sh@10 -- # set +x 00:26:52.666 21:20:07 -- target/identify_passthru.sh@31 -- # nvmfpid=3213811 00:26:52.666 21:20:07 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:52.666 21:20:07 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:52.666 21:20:07 -- target/identify_passthru.sh@35 -- # waitforlisten 3213811 00:26:52.666 21:20:07 -- common/autotest_common.sh@817 -- # '[' -z 3213811 ']' 00:26:52.666 21:20:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.666 21:20:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:52.666 21:20:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.666 21:20:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:52.666 21:20:07 -- common/autotest_common.sh@10 -- # set +x 00:26:52.666 [2024-04-18 21:20:07.933209] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
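Because nvmf_tgt is launched with --wait-for-rpc here, the framework stays idle until it is configured over JSON-RPC: the records that follow show nvmf_set_config enabling the passthrough identify-controller handler and framework_start_init finishing startup ("Custom identify ctrlr handler enabled"). Issued by hand instead of through the harness's rpc_cmd wrapper, the same two calls would look roughly like this (the default /var/tmp/spdk.sock RPC socket is assumed):

  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  ./scripts/rpc.py framework_start_init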
00:26:52.666 [2024-04-18 21:20:07.933253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.666 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.666 [2024-04-18 21:20:07.994959] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.666 [2024-04-18 21:20:08.075619] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.666 [2024-04-18 21:20:08.075652] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.666 [2024-04-18 21:20:08.075662] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.666 [2024-04-18 21:20:08.075669] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.666 [2024-04-18 21:20:08.075675] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.666 [2024-04-18 21:20:08.075716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.666 [2024-04-18 21:20:08.075814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.666 [2024-04-18 21:20:08.075875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.666 [2024-04-18 21:20:08.075878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.926 21:20:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:52.926 21:20:08 -- common/autotest_common.sh@850 -- # return 0 00:26:52.926 21:20:08 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:52.926 21:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.926 21:20:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.926 INFO: Log level set to 20 00:26:52.926 INFO: Requests: 00:26:52.926 { 00:26:52.926 "jsonrpc": "2.0", 00:26:52.926 "method": "nvmf_set_config", 00:26:52.926 "id": 1, 00:26:52.926 "params": { 00:26:52.926 "admin_cmd_passthru": { 00:26:52.926 "identify_ctrlr": true 00:26:52.926 } 00:26:52.926 } 00:26:52.926 } 00:26:52.926 00:26:52.926 INFO: response: 00:26:52.926 { 00:26:52.926 "jsonrpc": "2.0", 00:26:52.926 "id": 1, 00:26:52.926 "result": true 00:26:52.926 } 00:26:52.926 00:26:52.926 21:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.926 21:20:08 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:52.926 21:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.926 21:20:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.926 INFO: Setting log level to 20 00:26:52.926 INFO: Setting log level to 20 00:26:52.926 INFO: Log level set to 20 00:26:52.926 INFO: Log level set to 20 00:26:52.926 INFO: Requests: 00:26:52.926 { 00:26:52.926 "jsonrpc": "2.0", 00:26:52.926 "method": "framework_start_init", 00:26:52.926 "id": 1 00:26:52.926 } 00:26:52.926 00:26:52.926 INFO: Requests: 00:26:52.926 { 00:26:52.926 "jsonrpc": "2.0", 00:26:52.926 "method": "framework_start_init", 00:26:52.926 "id": 1 00:26:52.926 } 00:26:52.926 00:26:52.926 [2024-04-18 21:20:08.831377] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:52.926 INFO: response: 00:26:52.926 { 00:26:52.926 "jsonrpc": "2.0", 00:26:52.926 "id": 1, 00:26:52.926 "result": true 00:26:52.926 } 00:26:52.926 00:26:52.926 INFO: response: 00:26:52.926 { 00:26:52.926 
"jsonrpc": "2.0", 00:26:52.926 "id": 1, 00:26:52.926 "result": true 00:26:52.926 } 00:26:52.926 00:26:52.926 21:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.926 21:20:08 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.926 21:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.926 21:20:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.926 INFO: Setting log level to 40 00:26:52.926 INFO: Setting log level to 40 00:26:52.926 INFO: Setting log level to 40 00:26:52.926 [2024-04-18 21:20:08.844697] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.926 21:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.926 21:20:08 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:52.926 21:20:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:52.926 21:20:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.186 21:20:08 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:26:53.186 21:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.186 21:20:08 -- common/autotest_common.sh@10 -- # set +x 00:26:56.479 Nvme0n1 00:26:56.479 21:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.479 21:20:11 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:56.479 21:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.480 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:26:56.480 21:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.480 21:20:11 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:56.480 21:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.480 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:26:56.480 21:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.480 21:20:11 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.480 21:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.480 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:26:56.480 [2024-04-18 21:20:11.750374] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.480 21:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.480 21:20:11 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:56.480 21:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.480 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:26:56.480 [2024-04-18 21:20:11.758152] nvmf_rpc.c: 279:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:56.480 [ 00:26:56.480 { 00:26:56.480 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:56.480 "subtype": "Discovery", 00:26:56.480 "listen_addresses": [], 00:26:56.480 "allow_any_host": true, 00:26:56.480 "hosts": [] 00:26:56.480 }, 00:26:56.480 { 00:26:56.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.480 "subtype": "NVMe", 00:26:56.480 "listen_addresses": [ 00:26:56.480 { 00:26:56.480 "transport": "TCP", 00:26:56.480 "trtype": "TCP", 00:26:56.480 "adrfam": "IPv4", 00:26:56.480 "traddr": "10.0.0.2", 00:26:56.480 "trsvcid": "4420" 00:26:56.480 } 00:26:56.480 ], 
00:26:56.480 "allow_any_host": true, 00:26:56.480 "hosts": [], 00:26:56.480 "serial_number": "SPDK00000000000001", 00:26:56.480 "model_number": "SPDK bdev Controller", 00:26:56.480 "max_namespaces": 1, 00:26:56.480 "min_cntlid": 1, 00:26:56.480 "max_cntlid": 65519, 00:26:56.480 "namespaces": [ 00:26:56.480 { 00:26:56.480 "nsid": 1, 00:26:56.480 "bdev_name": "Nvme0n1", 00:26:56.480 "name": "Nvme0n1", 00:26:56.480 "nguid": "24E74A102900494092529F85C10A3425", 00:26:56.480 "uuid": "24e74a10-2900-4940-9252-9f85c10a3425" 00:26:56.480 } 00:26:56.480 ] 00:26:56.480 } 00:26:56.480 ] 00:26:56.480 21:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.480 21:20:11 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:56.480 21:20:11 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:56.480 21:20:11 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:56.480 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.480 21:20:11 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:26:56.480 21:20:11 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:56.480 21:20:11 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:56.480 21:20:11 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:56.480 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.480 21:20:12 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:56.480 21:20:12 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:26:56.480 21:20:12 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:56.480 21:20:12 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.480 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.480 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:26:56.480 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.480 21:20:12 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:56.480 21:20:12 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:56.480 21:20:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:56.480 21:20:12 -- nvmf/common.sh@117 -- # sync 00:26:56.480 21:20:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.480 21:20:12 -- nvmf/common.sh@120 -- # set +e 00:26:56.480 21:20:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.480 21:20:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.480 rmmod nvme_tcp 00:26:56.480 rmmod nvme_fabrics 00:26:56.480 rmmod nvme_keyring 00:26:56.480 21:20:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.480 21:20:12 -- nvmf/common.sh@124 -- # set -e 00:26:56.480 21:20:12 -- nvmf/common.sh@125 -- # return 0 00:26:56.480 21:20:12 -- nvmf/common.sh@478 -- # '[' -n 3213811 ']' 00:26:56.480 21:20:12 -- nvmf/common.sh@479 -- # killprocess 3213811 00:26:56.480 21:20:12 -- common/autotest_common.sh@936 -- # '[' -z 3213811 ']' 00:26:56.480 21:20:12 -- common/autotest_common.sh@940 -- # kill -0 3213811 00:26:56.480 21:20:12 -- common/autotest_common.sh@941 -- # uname 00:26:56.480 21:20:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:56.480 
21:20:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3213811 00:26:56.480 21:20:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:56.480 21:20:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:56.480 21:20:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3213811' 00:26:56.480 killing process with pid 3213811 00:26:56.480 21:20:12 -- common/autotest_common.sh@955 -- # kill 3213811 00:26:56.480 [2024-04-18 21:20:12.143050] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:56.480 21:20:12 -- common/autotest_common.sh@960 -- # wait 3213811 00:26:57.863 21:20:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:57.863 21:20:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:57.863 21:20:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:57.863 21:20:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.863 21:20:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:57.863 21:20:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.863 21:20:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:57.863 21:20:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.810 21:20:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.810 00:26:59.810 real 0m22.417s 00:26:59.810 user 0m29.805s 00:26:59.810 sys 0m5.312s 00:26:59.810 21:20:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:59.810 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:26:59.810 ************************************ 00:26:59.810 END TEST nvmf_identify_passthru 00:26:59.810 ************************************ 00:26:59.810 21:20:15 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:59.810 21:20:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:59.810 21:20:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:59.810 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:27:00.069 ************************************ 00:27:00.069 START TEST nvmf_dif 00:27:00.069 ************************************ 00:27:00.069 21:20:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:00.069 * Looking for test storage... 
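(Sketch, not part of the harness output: the nvmf_identify_passthru run above reduces to the following RPC sequence. It is written here against scripts/rpc.py directly — the harness issues the same calls through its rpc_cmd wrapper inside the cvl_0_0_ns_spdk namespace — and the PCIe address, NQN, serial number and listen address are the ones observed in this run, not defaults.)

  # the target was started with --wait-for-rpc, so initialization is driven explicitly
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the test then compares the serial/model reported over PCIe and over the fabric:
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'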
00:27:00.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.069 21:20:15 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.069 21:20:15 -- nvmf/common.sh@7 -- # uname -s 00:27:00.069 21:20:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.069 21:20:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.069 21:20:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.069 21:20:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.069 21:20:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.069 21:20:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.069 21:20:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.069 21:20:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.069 21:20:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.069 21:20:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.069 21:20:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:00.069 21:20:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:00.069 21:20:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.069 21:20:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.069 21:20:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.069 21:20:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.069 21:20:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.069 21:20:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.069 21:20:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.069 21:20:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.069 21:20:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.069 21:20:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.069 21:20:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.069 21:20:15 -- paths/export.sh@5 -- # export PATH 00:27:00.069 21:20:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.069 21:20:15 -- nvmf/common.sh@47 -- # : 0 00:27:00.069 21:20:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:00.069 21:20:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:00.069 21:20:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.069 21:20:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.069 21:20:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.069 21:20:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:00.069 21:20:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:00.069 21:20:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:00.069 21:20:15 -- target/dif.sh@15 -- # NULL_META=16 00:27:00.069 21:20:15 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:00.069 21:20:15 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:00.069 21:20:15 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:00.069 21:20:15 -- target/dif.sh@135 -- # nvmftestinit 00:27:00.069 21:20:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:00.069 21:20:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.069 21:20:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:00.069 21:20:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:00.069 21:20:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:00.069 21:20:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.069 21:20:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:00.069 21:20:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.328 21:20:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:00.328 21:20:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:00.328 21:20:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:00.328 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:27:06.898 21:20:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:06.898 21:20:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.898 21:20:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.898 21:20:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.898 21:20:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.898 21:20:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.898 21:20:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.898 21:20:21 -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.898 21:20:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.898 21:20:21 -- nvmf/common.sh@296 -- # e810=() 00:27:06.898 21:20:21 -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.898 21:20:21 -- nvmf/common.sh@297 -- # x722=() 00:27:06.899 21:20:21 -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.899 21:20:21 -- nvmf/common.sh@298 -- # mlx=() 00:27:06.899 21:20:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.899 21:20:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:06.899 21:20:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.899 21:20:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.899 21:20:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.899 21:20:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.899 21:20:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.899 21:20:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:06.899 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:06.899 21:20:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.899 21:20:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:06.899 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:06.899 21:20:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.899 21:20:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.899 21:20:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.899 21:20:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:06.899 21:20:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.899 21:20:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:06.899 Found net devices under 0000:86:00.0: cvl_0_0 00:27:06.899 21:20:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.899 21:20:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.899 21:20:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.899 21:20:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:06.899 21:20:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.899 21:20:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:06.899 Found net devices under 0000:86:00.1: cvl_0_1 00:27:06.899 21:20:21 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:06.899 21:20:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:06.899 21:20:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:06.899 21:20:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:06.899 21:20:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:06.899 21:20:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.899 21:20:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.899 21:20:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.899 21:20:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:06.899 21:20:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.899 21:20:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.899 21:20:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:06.899 21:20:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.899 21:20:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.899 21:20:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:06.899 21:20:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:06.899 21:20:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.899 21:20:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.899 21:20:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.899 21:20:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.899 21:20:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:06.899 21:20:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.899 21:20:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.899 21:20:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.899 21:20:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:27:06.899 00:27:06.899 --- 10.0.0.2 ping statistics --- 00:27:06.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.899 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:27:06.899 21:20:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:27:06.899 00:27:06.899 --- 10.0.0.1 ping statistics --- 00:27:06.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.899 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:27:06.899 21:20:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.899 21:20:22 -- nvmf/common.sh@411 -- # return 0 00:27:06.899 21:20:22 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:06.899 21:20:22 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:09.436 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:09.436 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:09.436 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:09.436 21:20:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.436 21:20:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:09.436 21:20:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:09.436 21:20:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.436 21:20:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:09.436 21:20:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:09.436 21:20:25 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:09.436 21:20:25 -- target/dif.sh@137 -- # nvmfappstart 00:27:09.436 21:20:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:09.436 21:20:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:09.436 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:27:09.436 21:20:25 -- nvmf/common.sh@470 -- # nvmfpid=3219894 00:27:09.436 21:20:25 -- nvmf/common.sh@471 -- # waitforlisten 3219894 00:27:09.436 21:20:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:09.436 21:20:25 -- common/autotest_common.sh@817 -- # '[' -z 3219894 ']' 00:27:09.436 21:20:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.436 21:20:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:09.436 21:20:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
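(Sketch: the test network that nvmftestinit/nvmf_tcp_init built above, written out as plain ip/iptables commands. The cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to the two e810 ports used in this run.)

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator address
  # nvmf_tgt itself is then launched inside the namespace: ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF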
00:27:09.436 21:20:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:09.436 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:27:09.436 [2024-04-18 21:20:25.250251] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:27:09.436 [2024-04-18 21:20:25.250291] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.436 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.437 [2024-04-18 21:20:25.313460] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.695 [2024-04-18 21:20:25.391380] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.695 [2024-04-18 21:20:25.391415] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.696 [2024-04-18 21:20:25.391424] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.696 [2024-04-18 21:20:25.391431] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.696 [2024-04-18 21:20:25.391436] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.696 [2024-04-18 21:20:25.391465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.263 21:20:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:10.263 21:20:26 -- common/autotest_common.sh@850 -- # return 0 00:27:10.263 21:20:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:10.263 21:20:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:10.263 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:27:10.263 21:20:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.263 21:20:26 -- target/dif.sh@139 -- # create_transport 00:27:10.263 21:20:26 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:10.263 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.263 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:27:10.263 [2024-04-18 21:20:26.095336] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.263 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.263 21:20:26 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:10.263 21:20:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:10.263 21:20:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:10.263 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:27:10.522 ************************************ 00:27:10.522 START TEST fio_dif_1_default 00:27:10.522 ************************************ 00:27:10.522 21:20:26 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:10.522 21:20:26 -- target/dif.sh@86 -- # create_subsystems 0 00:27:10.522 21:20:26 -- target/dif.sh@28 -- # local sub 00:27:10.522 21:20:26 -- target/dif.sh@30 -- # for sub in "$@" 00:27:10.522 21:20:26 -- target/dif.sh@31 -- # create_subsystem 0 00:27:10.522 21:20:26 -- target/dif.sh@18 -- # local sub_id=0 00:27:10.522 21:20:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:10.522 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.523 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:27:10.523 
bdev_null0 00:27:10.523 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.523 21:20:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:10.523 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.523 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:27:10.523 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.523 21:20:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:10.523 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.523 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:27:10.523 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.523 21:20:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:10.523 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.523 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:27:10.523 [2024-04-18 21:20:26.255872] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.523 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.523 21:20:26 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:10.523 21:20:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.523 21:20:26 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.523 21:20:26 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:10.523 21:20:26 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:10.523 21:20:26 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:10.523 21:20:26 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:10.523 21:20:26 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.523 21:20:26 -- common/autotest_common.sh@1327 -- # shift 00:27:10.523 21:20:26 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:10.523 21:20:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:10.523 21:20:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.523 21:20:26 -- target/dif.sh@82 -- # gen_fio_conf 00:27:10.523 21:20:26 -- nvmf/common.sh@521 -- # config=() 00:27:10.523 21:20:26 -- nvmf/common.sh@521 -- # local subsystem config 00:27:10.523 21:20:26 -- target/dif.sh@54 -- # local file 00:27:10.523 21:20:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:10.523 21:20:26 -- target/dif.sh@56 -- # cat 00:27:10.523 21:20:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:10.523 { 00:27:10.523 "params": { 00:27:10.523 "name": "Nvme$subsystem", 00:27:10.523 "trtype": "$TEST_TRANSPORT", 00:27:10.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.523 "adrfam": "ipv4", 00:27:10.523 "trsvcid": "$NVMF_PORT", 00:27:10.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.523 "hdgst": ${hdgst:-false}, 00:27:10.523 "ddgst": ${ddgst:-false} 00:27:10.523 }, 00:27:10.523 "method": "bdev_nvme_attach_controller" 00:27:10.523 } 00:27:10.523 EOF 00:27:10.523 )") 00:27:10.523 21:20:26 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.523 21:20:26 -- nvmf/common.sh@543 -- # cat 00:27:10.523 21:20:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:10.523 21:20:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:10.523 21:20:26 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:10.523 21:20:26 -- target/dif.sh@72 -- # (( file <= files )) 00:27:10.523 21:20:26 -- nvmf/common.sh@545 -- # jq . 00:27:10.523 21:20:26 -- nvmf/common.sh@546 -- # IFS=, 00:27:10.523 21:20:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:10.523 "params": { 00:27:10.523 "name": "Nvme0", 00:27:10.523 "trtype": "tcp", 00:27:10.523 "traddr": "10.0.0.2", 00:27:10.523 "adrfam": "ipv4", 00:27:10.523 "trsvcid": "4420", 00:27:10.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:10.523 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:10.523 "hdgst": false, 00:27:10.523 "ddgst": false 00:27:10.523 }, 00:27:10.523 "method": "bdev_nvme_attach_controller" 00:27:10.523 }' 00:27:10.523 21:20:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:10.523 21:20:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:10.523 21:20:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.523 21:20:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.523 21:20:26 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:10.523 21:20:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:10.523 21:20:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:10.523 21:20:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:10.523 21:20:26 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:10.523 21:20:26 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.782 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:10.782 fio-3.35 00:27:10.782 Starting 1 thread 00:27:10.782 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.995 00:27:22.995 filename0: (groupid=0, jobs=1): err= 0: pid=3220277: Thu Apr 18 21:20:37 2024 00:27:22.995 read: IOPS=185, BW=741KiB/s (758kB/s)(7424KiB/10024msec) 00:27:22.995 slat (nsec): min=5756, max=25079, avg=6045.91, stdev=870.35 00:27:22.995 clat (usec): min=1058, max=43852, avg=21586.69, stdev=20374.38 00:27:22.995 lat (usec): min=1064, max=43877, avg=21592.74, stdev=20374.36 00:27:22.995 clat percentiles (usec): 00:27:22.995 | 1.00th=[ 1057], 5.00th=[ 1074], 10.00th=[ 1074], 20.00th=[ 1090], 00:27:22.995 | 30.00th=[ 1090], 40.00th=[ 1205], 50.00th=[41157], 60.00th=[41681], 00:27:22.995 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:27:22.995 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:27:22.995 | 99.99th=[43779] 00:27:22.995 bw ( KiB/s): min= 672, max= 768, per=99.92%, avg=740.80, stdev=34.86, samples=20 00:27:22.995 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:27:22.995 lat (msec) : 2=49.78%, 50=50.22% 00:27:22.995 cpu : usr=94.15%, sys=5.60%, ctx=13, majf=0, minf=224 00:27:22.995 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:22.995 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.995 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:22.995 00:27:22.995 Run status group 0 (all jobs): 00:27:22.995 READ: bw=741KiB/s (758kB/s), 741KiB/s-741KiB/s (758kB/s-758kB/s), io=7424KiB (7602kB), run=10024-10024msec 00:27:22.995 21:20:37 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:22.995 21:20:37 -- target/dif.sh@43 -- # local sub 00:27:22.995 21:20:37 -- target/dif.sh@45 -- # for sub in "$@" 00:27:22.995 21:20:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:22.995 21:20:37 -- target/dif.sh@36 -- # local sub_id=0 00:27:22.995 21:20:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:22.995 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.995 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.995 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.995 21:20:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:22.995 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.995 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.995 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.995 00:27:22.995 real 0m11.050s 00:27:22.995 user 0m16.441s 00:27:22.995 sys 0m0.846s 00:27:22.995 21:20:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:22.995 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.995 ************************************ 00:27:22.995 END TEST fio_dif_1_default 00:27:22.995 ************************************ 00:27:22.995 21:20:37 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:22.995 21:20:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:22.995 21:20:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:22.995 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.995 ************************************ 00:27:22.995 START TEST fio_dif_1_multi_subsystems 00:27:22.995 ************************************ 00:27:22.995 21:20:37 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:22.995 21:20:37 -- target/dif.sh@92 -- # local files=1 00:27:22.995 21:20:37 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:22.995 21:20:37 -- target/dif.sh@28 -- # local sub 00:27:22.995 21:20:37 -- target/dif.sh@30 -- # for sub in "$@" 00:27:22.995 21:20:37 -- target/dif.sh@31 -- # create_subsystem 0 00:27:22.995 21:20:37 -- target/dif.sh@18 -- # local sub_id=0 00:27:22.995 21:20:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:22.995 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.995 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.995 bdev_null0 00:27:22.995 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.995 21:20:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:22.995 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.995 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.995 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.995 21:20:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:22.995 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.995 21:20:37 -- common/autotest_common.sh@10 -- # 
set +x 00:27:22.995 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.995 21:20:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:22.995 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.995 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.995 [2024-04-18 21:20:37.467153] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.995 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.995 21:20:37 -- target/dif.sh@30 -- # for sub in "$@" 00:27:22.995 21:20:37 -- target/dif.sh@31 -- # create_subsystem 1 00:27:22.995 21:20:37 -- target/dif.sh@18 -- # local sub_id=1 00:27:22.995 21:20:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:22.995 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.996 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.996 bdev_null1 00:27:22.996 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.996 21:20:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:22.996 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.996 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.996 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.996 21:20:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:22.996 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.996 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.996 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.996 21:20:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.996 21:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.996 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:27:22.996 21:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.996 21:20:37 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:22.996 21:20:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.996 21:20:37 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.996 21:20:37 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:22.996 21:20:37 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:22.996 21:20:37 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:22.996 21:20:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:22.996 21:20:37 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:22.996 21:20:37 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:22.996 21:20:37 -- common/autotest_common.sh@1327 -- # shift 00:27:22.996 21:20:37 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:22.996 21:20:37 -- nvmf/common.sh@521 -- # config=() 00:27:22.996 21:20:37 -- target/dif.sh@82 -- # gen_fio_conf 00:27:22.996 21:20:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.996 21:20:37 -- nvmf/common.sh@521 -- # local subsystem config 00:27:22.996 21:20:37 -- 
target/dif.sh@54 -- # local file 00:27:22.996 21:20:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:22.996 21:20:37 -- target/dif.sh@56 -- # cat 00:27:22.996 21:20:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:22.996 { 00:27:22.996 "params": { 00:27:22.996 "name": "Nvme$subsystem", 00:27:22.996 "trtype": "$TEST_TRANSPORT", 00:27:22.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.996 "adrfam": "ipv4", 00:27:22.996 "trsvcid": "$NVMF_PORT", 00:27:22.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.996 "hdgst": ${hdgst:-false}, 00:27:22.996 "ddgst": ${ddgst:-false} 00:27:22.996 }, 00:27:22.996 "method": "bdev_nvme_attach_controller" 00:27:22.996 } 00:27:22.996 EOF 00:27:22.996 )") 00:27:22.996 21:20:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:22.996 21:20:37 -- nvmf/common.sh@543 -- # cat 00:27:22.996 21:20:37 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:22.996 21:20:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:22.996 21:20:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:22.996 21:20:37 -- target/dif.sh@72 -- # (( file <= files )) 00:27:22.996 21:20:37 -- target/dif.sh@73 -- # cat 00:27:22.996 21:20:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:22.996 21:20:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:22.996 { 00:27:22.996 "params": { 00:27:22.996 "name": "Nvme$subsystem", 00:27:22.996 "trtype": "$TEST_TRANSPORT", 00:27:22.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.996 "adrfam": "ipv4", 00:27:22.996 "trsvcid": "$NVMF_PORT", 00:27:22.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.996 "hdgst": ${hdgst:-false}, 00:27:22.996 "ddgst": ${ddgst:-false} 00:27:22.996 }, 00:27:22.996 "method": "bdev_nvme_attach_controller" 00:27:22.996 } 00:27:22.996 EOF 00:27:22.996 )") 00:27:22.996 21:20:37 -- target/dif.sh@72 -- # (( file++ )) 00:27:22.996 21:20:37 -- target/dif.sh@72 -- # (( file <= files )) 00:27:22.996 21:20:37 -- nvmf/common.sh@543 -- # cat 00:27:22.996 21:20:37 -- nvmf/common.sh@545 -- # jq . 
00:27:22.996 21:20:37 -- nvmf/common.sh@546 -- # IFS=, 00:27:22.996 21:20:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:22.996 "params": { 00:27:22.996 "name": "Nvme0", 00:27:22.996 "trtype": "tcp", 00:27:22.996 "traddr": "10.0.0.2", 00:27:22.996 "adrfam": "ipv4", 00:27:22.996 "trsvcid": "4420", 00:27:22.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:22.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:22.996 "hdgst": false, 00:27:22.996 "ddgst": false 00:27:22.996 }, 00:27:22.996 "method": "bdev_nvme_attach_controller" 00:27:22.996 },{ 00:27:22.996 "params": { 00:27:22.996 "name": "Nvme1", 00:27:22.996 "trtype": "tcp", 00:27:22.996 "traddr": "10.0.0.2", 00:27:22.996 "adrfam": "ipv4", 00:27:22.996 "trsvcid": "4420", 00:27:22.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:22.996 "hdgst": false, 00:27:22.996 "ddgst": false 00:27:22.996 }, 00:27:22.996 "method": "bdev_nvme_attach_controller" 00:27:22.996 }' 00:27:22.996 21:20:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:22.996 21:20:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:22.996 21:20:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.996 21:20:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:22.996 21:20:37 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:22.996 21:20:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:22.996 21:20:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:22.996 21:20:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:22.997 21:20:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:22.997 21:20:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.997 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:22.997 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:22.997 fio-3.35 00:27:22.997 Starting 2 threads 00:27:22.997 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.981 00:27:32.981 filename0: (groupid=0, jobs=1): err= 0: pid=3222251: Thu Apr 18 21:20:48 2024 00:27:32.981 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10005msec) 00:27:32.981 slat (nsec): min=5944, max=25685, avg=8436.20, stdev=2864.59 00:27:32.981 clat (usec): min=40988, max=43079, avg=42009.75, stdev=203.61 00:27:32.981 lat (usec): min=40994, max=43100, avg=42018.19, stdev=203.87 00:27:32.981 clat percentiles (usec): 00:27:32.981 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:27:32.981 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:32.981 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:32.981 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:27:32.981 | 99.99th=[43254] 00:27:32.981 bw ( KiB/s): min= 352, max= 384, per=49.93%, avg=380.63, stdev=10.09, samples=19 00:27:32.981 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:27:32.981 lat (msec) : 50=100.00% 00:27:32.981 cpu : usr=97.93%, sys=1.81%, ctx=9, majf=0, minf=51 00:27:32.981 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:32.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.981 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.981 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:32.981 filename1: (groupid=0, jobs=1): err= 0: pid=3222252: Thu Apr 18 21:20:48 2024 00:27:32.981 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10007msec) 00:27:32.981 slat (nsec): min=5938, max=24560, avg=8381.82, stdev=2873.79 00:27:32.981 clat (usec): min=41797, max=43039, avg=42022.30, stdev=206.16 00:27:32.981 lat (usec): min=41810, max=43063, avg=42030.68, stdev=206.16 00:27:32.981 clat percentiles (usec): 00:27:32.981 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:27:32.981 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:32.981 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:32.981 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:27:32.981 | 99.99th=[43254] 00:27:32.981 bw ( KiB/s): min= 352, max= 384, per=49.80%, avg=379.20, stdev=11.72, samples=20 00:27:32.981 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:27:32.981 lat (msec) : 50=100.00% 00:27:32.981 cpu : usr=97.83%, sys=1.91%, ctx=11, majf=0, minf=170 00:27:32.981 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.981 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.981 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:32.981 00:27:32.981 Run status group 0 (all jobs): 00:27:32.981 READ: bw=761KiB/s (779kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7616KiB (7799kB), run=10005-10007msec 00:27:32.981 21:20:48 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:32.981 21:20:48 -- target/dif.sh@43 -- # local sub 00:27:32.981 21:20:48 -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.981 21:20:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:32.981 21:20:48 -- target/dif.sh@36 -- # local sub_id=0 00:27:32.981 21:20:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.981 21:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.981 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.981 21:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.981 21:20:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:32.981 21:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.981 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.981 21:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.981 21:20:48 -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.981 21:20:48 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:32.981 21:20:48 -- target/dif.sh@36 -- # local sub_id=1 00:27:32.981 21:20:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:32.981 21:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.981 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.981 21:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.981 21:20:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:32.981 21:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.981 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.981 
21:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.981 00:27:32.981 real 0m11.176s 00:27:32.981 user 0m25.790s 00:27:32.981 sys 0m0.639s 00:27:32.981 21:20:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:32.981 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.981 ************************************ 00:27:32.981 END TEST fio_dif_1_multi_subsystems 00:27:32.981 ************************************ 00:27:32.981 21:20:48 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:32.981 21:20:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:32.981 21:20:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:32.981 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.981 ************************************ 00:27:32.981 START TEST fio_dif_rand_params 00:27:32.981 ************************************ 00:27:32.981 21:20:48 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:32.981 21:20:48 -- target/dif.sh@100 -- # local NULL_DIF 00:27:32.981 21:20:48 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:32.981 21:20:48 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:32.981 21:20:48 -- target/dif.sh@103 -- # bs=128k 00:27:32.981 21:20:48 -- target/dif.sh@103 -- # numjobs=3 00:27:32.981 21:20:48 -- target/dif.sh@103 -- # iodepth=3 00:27:32.981 21:20:48 -- target/dif.sh@103 -- # runtime=5 00:27:32.981 21:20:48 -- target/dif.sh@105 -- # create_subsystems 0 00:27:32.981 21:20:48 -- target/dif.sh@28 -- # local sub 00:27:32.981 21:20:48 -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.981 21:20:48 -- target/dif.sh@31 -- # create_subsystem 0 00:27:32.981 21:20:48 -- target/dif.sh@18 -- # local sub_id=0 00:27:32.982 21:20:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:32.982 21:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.982 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 bdev_null0 00:27:32.982 21:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.982 21:20:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:32.982 21:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.982 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 21:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.982 21:20:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:32.982 21:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.982 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 21:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.982 21:20:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:32.982 21:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.982 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 [2024-04-18 21:20:48.811684] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.982 21:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.982 21:20:48 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:32.982 21:20:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.982 21:20:48 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.982 21:20:48 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:32.982 21:20:48 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:32.982 21:20:48 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.982 21:20:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:32.982 21:20:48 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:32.982 21:20:48 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.982 21:20:48 -- common/autotest_common.sh@1327 -- # shift 00:27:32.982 21:20:48 -- target/dif.sh@82 -- # gen_fio_conf 00:27:32.982 21:20:48 -- nvmf/common.sh@521 -- # config=() 00:27:32.982 21:20:48 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:32.982 21:20:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.982 21:20:48 -- nvmf/common.sh@521 -- # local subsystem config 00:27:32.982 21:20:48 -- target/dif.sh@54 -- # local file 00:27:32.982 21:20:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:32.982 21:20:48 -- target/dif.sh@56 -- # cat 00:27:32.982 21:20:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:32.982 { 00:27:32.982 "params": { 00:27:32.982 "name": "Nvme$subsystem", 00:27:32.982 "trtype": "$TEST_TRANSPORT", 00:27:32.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.982 "adrfam": "ipv4", 00:27:32.982 "trsvcid": "$NVMF_PORT", 00:27:32.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.982 "hdgst": ${hdgst:-false}, 00:27:32.982 "ddgst": ${ddgst:-false} 00:27:32.982 }, 00:27:32.982 "method": "bdev_nvme_attach_controller" 00:27:32.982 } 00:27:32.982 EOF 00:27:32.982 )") 00:27:32.982 21:20:48 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.982 21:20:48 -- nvmf/common.sh@543 -- # cat 00:27:32.982 21:20:48 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:32.982 21:20:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:32.982 21:20:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:32.982 21:20:48 -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.982 21:20:48 -- nvmf/common.sh@545 -- # jq . 
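The xtrace above shows gen_nvmf_target_json assembling the fio plugin's JSON: one heredoc fragment per subsystem is pushed into a bash array, the fragments are joined with IFS=',' and the result is pretty-printed with jq before being handed to fio. A minimal stand-alone sketch of that assembly pattern follows; only the bdev_nvme_attach_controller fragment and the IFS/jq step are taken from the trace, while the outer "subsystems"/"bdev" wrapper is an assumption about what --spdk_json_conf expects.

    #!/usr/bin/env bash
    # Stand-alone sketch of the config assembly traced above (not the real
    # gen_nvmf_target_json): heredoc fragment per subsystem -> bash array ->
    # IFS=',' join -> jq. The "subsystems"/"bdev" wrapper is an assumption.
    set -euo pipefail

    config=()
    for subsystem in 0; do   # only subsystem 0 in this test; add indices for more
      config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
      )")
    done

    (
      # IFS=',' joins the fragments when more than one subsystem is configured,
      # mirroring the IFS=, / printf pair seen in the trace.
      IFS=,
      jq . <<JSON
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [ ${config[*]} ] }
      ]
    }
    JSON
    )

(Note: the heredoc delimiters must start in column 0 when this is copied into a real script; they are indented here only to set the block off from the log.)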
00:27:32.982 21:20:48 -- nvmf/common.sh@546 -- # IFS=, 00:27:32.982 21:20:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:32.982 "params": { 00:27:32.982 "name": "Nvme0", 00:27:32.982 "trtype": "tcp", 00:27:32.982 "traddr": "10.0.0.2", 00:27:32.982 "adrfam": "ipv4", 00:27:32.982 "trsvcid": "4420", 00:27:32.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.982 "hdgst": false, 00:27:32.982 "ddgst": false 00:27:32.982 }, 00:27:32.982 "method": "bdev_nvme_attach_controller" 00:27:32.982 }' 00:27:32.982 21:20:48 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:32.982 21:20:48 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:32.982 21:20:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.982 21:20:48 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.982 21:20:48 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:32.982 21:20:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:32.982 21:20:48 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:32.982 21:20:48 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:32.982 21:20:48 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:32.982 21:20:48 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.241 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:33.241 ... 00:27:33.241 fio-3.35 00:27:33.241 Starting 3 threads 00:27:33.499 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.095 00:27:40.095 filename0: (groupid=0, jobs=1): err= 0: pid=3224229: Thu Apr 18 21:20:54 2024 00:27:40.095 read: IOPS=131, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5006msec) 00:27:40.095 slat (nsec): min=6146, max=42150, avg=13085.04, stdev=7351.47 00:27:40.095 clat (usec): min=6329, max=58936, avg=22759.99, stdev=18538.49 00:27:40.095 lat (usec): min=6336, max=58944, avg=22773.08, stdev=18538.75 00:27:40.095 clat percentiles (usec): 00:27:40.095 | 1.00th=[ 6783], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[10683], 00:27:40.095 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12911], 60.00th=[14091], 00:27:40.095 | 70.00th=[15533], 80.00th=[52167], 90.00th=[54264], 95.00th=[55313], 00:27:40.095 | 99.00th=[57410], 99.50th=[57934], 99.90th=[58983], 99.95th=[58983], 00:27:40.095 | 99.99th=[58983] 00:27:40.095 bw ( KiB/s): min=12288, max=29184, per=19.09%, avg=16819.20, stdev=4955.29, samples=10 00:27:40.095 iops : min= 96, max= 228, avg=131.40, stdev=38.71, samples=10 00:27:40.095 lat (msec) : 10=16.08%, 20=57.97%, 50=0.61%, 100=25.34% 00:27:40.095 cpu : usr=97.34%, sys=2.32%, ctx=7, majf=0, minf=67 00:27:40.095 IO depths : 1=7.1%, 2=92.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.095 issued rwts: total=659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:40.095 filename0: (groupid=0, jobs=1): err= 0: pid=3224230: Thu Apr 18 21:20:54 2024 00:27:40.095 read: IOPS=288, BW=36.0MiB/s (37.8MB/s)(182MiB/5045msec) 00:27:40.095 slat (nsec): min=6256, max=74025, avg=13191.94, stdev=6878.57 00:27:40.095 clat 
(usec): min=4166, max=93686, avg=10329.26, stdev=10618.47 00:27:40.095 lat (usec): min=4175, max=93699, avg=10342.46, stdev=10618.91 00:27:40.095 clat percentiles (usec): 00:27:40.095 | 1.00th=[ 4424], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6063], 00:27:40.095 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 8160], 00:27:40.095 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[10945], 95.00th=[49021], 00:27:40.095 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54789], 99.95th=[93848], 00:27:40.095 | 99.99th=[93848] 00:27:40.095 bw ( KiB/s): min=25344, max=50688, per=42.21%, avg=37196.80, stdev=7975.86, samples=10 00:27:40.095 iops : min= 198, max= 396, avg=290.60, stdev=62.31, samples=10 00:27:40.095 lat (msec) : 10=82.12%, 20=11.69%, 50=2.82%, 100=3.37% 00:27:40.095 cpu : usr=96.69%, sys=2.85%, ctx=13, majf=0, minf=168 00:27:40.095 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.095 issued rwts: total=1454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:40.095 filename0: (groupid=0, jobs=1): err= 0: pid=3224231: Thu Apr 18 21:20:54 2024 00:27:40.095 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(170MiB/5005msec) 00:27:40.095 slat (nsec): min=6087, max=36503, avg=12396.72, stdev=7509.28 00:27:40.095 clat (usec): min=4424, max=95324, avg=11016.74, stdev=11355.95 00:27:40.095 lat (usec): min=4430, max=95349, avg=11029.14, stdev=11356.39 00:27:40.095 clat percentiles (usec): 00:27:40.095 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6521], 00:27:40.095 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8586], 00:27:40.095 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11469], 95.00th=[49021], 00:27:40.095 | 99.00th=[52691], 99.50th=[53216], 99.90th=[91751], 99.95th=[94897], 00:27:40.095 | 99.99th=[94897] 00:27:40.095 bw ( KiB/s): min=29184, max=48896, per=39.46%, avg=34770.60, stdev=6560.00, samples=10 00:27:40.095 iops : min= 228, max= 382, avg=271.60, stdev=51.29, samples=10 00:27:40.095 lat (msec) : 10=80.37%, 20=12.50%, 50=3.31%, 100=3.82% 00:27:40.095 cpu : usr=96.00%, sys=3.56%, ctx=9, majf=0, minf=105 00:27:40.095 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.095 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:40.095 00:27:40.095 Run status group 0 (all jobs): 00:27:40.095 READ: bw=86.0MiB/s (90.2MB/s), 16.5MiB/s-36.0MiB/s (17.3MB/s-37.8MB/s), io=434MiB (455MB), run=5005-5045msec 00:27:40.095 21:20:54 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:40.095 21:20:54 -- target/dif.sh@43 -- # local sub 00:27:40.095 21:20:54 -- target/dif.sh@45 -- # for sub in "$@" 00:27:40.095 21:20:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:40.095 21:20:54 -- target/dif.sh@36 -- # local sub_id=0 00:27:40.095 21:20:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:40.095 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:27:40.095 21:20:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:40.095 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.095 21:20:54 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:40.095 21:20:54 -- target/dif.sh@109 -- # bs=4k 00:27:40.095 21:20:54 -- target/dif.sh@109 -- # numjobs=8 00:27:40.095 21:20:54 -- target/dif.sh@109 -- # iodepth=16 00:27:40.095 21:20:54 -- target/dif.sh@109 -- # runtime= 00:27:40.095 21:20:54 -- target/dif.sh@109 -- # files=2 00:27:40.095 21:20:54 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:40.095 21:20:54 -- target/dif.sh@28 -- # local sub 00:27:40.095 21:20:54 -- target/dif.sh@30 -- # for sub in "$@" 00:27:40.095 21:20:54 -- target/dif.sh@31 -- # create_subsystem 0 00:27:40.095 21:20:54 -- target/dif.sh@18 -- # local sub_id=0 00:27:40.095 21:20:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:40.095 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 bdev_null0 00:27:40.095 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.095 21:20:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:40.095 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.095 21:20:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:40.095 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.095 21:20:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:40.095 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 [2024-04-18 21:20:55.008334] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.095 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.095 21:20:55 -- target/dif.sh@30 -- # for sub in "$@" 00:27:40.095 21:20:55 -- target/dif.sh@31 -- # create_subsystem 1 00:27:40.095 21:20:55 -- target/dif.sh@18 -- # local sub_id=1 00:27:40.095 21:20:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:40.095 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 bdev_null1 00:27:40.095 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.095 21:20:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:40.095 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.095 21:20:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:40.095 21:20:55 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.095 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.095 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.096 21:20:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.096 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.096 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.096 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.096 21:20:55 -- target/dif.sh@30 -- # for sub in "$@" 00:27:40.096 21:20:55 -- target/dif.sh@31 -- # create_subsystem 2 00:27:40.096 21:20:55 -- target/dif.sh@18 -- # local sub_id=2 00:27:40.096 21:20:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:40.096 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.096 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.096 bdev_null2 00:27:40.096 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.096 21:20:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:40.096 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.096 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.096 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.096 21:20:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:40.096 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.096 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.096 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.096 21:20:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:40.096 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.096 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:40.096 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.096 21:20:55 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:40.096 21:20:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.096 21:20:55 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.096 21:20:55 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:40.096 21:20:55 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:40.096 21:20:55 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:40.096 21:20:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:40.096 21:20:55 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:40.096 21:20:55 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:40.096 21:20:55 -- common/autotest_common.sh@1327 -- # shift 00:27:40.096 21:20:55 -- nvmf/common.sh@521 -- # config=() 00:27:40.096 21:20:55 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:40.096 21:20:55 -- target/dif.sh@82 -- # gen_fio_conf 00:27:40.096 21:20:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.096 21:20:55 -- nvmf/common.sh@521 -- # local subsystem config 00:27:40.096 21:20:55 -- target/dif.sh@54 -- # local 
file 00:27:40.096 21:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:40.096 21:20:55 -- target/dif.sh@56 -- # cat 00:27:40.096 21:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:40.096 { 00:27:40.096 "params": { 00:27:40.096 "name": "Nvme$subsystem", 00:27:40.096 "trtype": "$TEST_TRANSPORT", 00:27:40.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.096 "adrfam": "ipv4", 00:27:40.096 "trsvcid": "$NVMF_PORT", 00:27:40.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.096 "hdgst": ${hdgst:-false}, 00:27:40.096 "ddgst": ${ddgst:-false} 00:27:40.096 }, 00:27:40.096 "method": "bdev_nvme_attach_controller" 00:27:40.096 } 00:27:40.096 EOF 00:27:40.096 )") 00:27:40.096 21:20:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:40.096 21:20:55 -- nvmf/common.sh@543 -- # cat 00:27:40.096 21:20:55 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:40.096 21:20:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:40.096 21:20:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:40.096 21:20:55 -- target/dif.sh@72 -- # (( file <= files )) 00:27:40.096 21:20:55 -- target/dif.sh@73 -- # cat 00:27:40.096 21:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:40.096 21:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:40.096 { 00:27:40.096 "params": { 00:27:40.096 "name": "Nvme$subsystem", 00:27:40.096 "trtype": "$TEST_TRANSPORT", 00:27:40.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.096 "adrfam": "ipv4", 00:27:40.096 "trsvcid": "$NVMF_PORT", 00:27:40.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.096 "hdgst": ${hdgst:-false}, 00:27:40.096 "ddgst": ${ddgst:-false} 00:27:40.096 }, 00:27:40.096 "method": "bdev_nvme_attach_controller" 00:27:40.096 } 00:27:40.096 EOF 00:27:40.096 )") 00:27:40.096 21:20:55 -- target/dif.sh@72 -- # (( file++ )) 00:27:40.096 21:20:55 -- target/dif.sh@72 -- # (( file <= files )) 00:27:40.096 21:20:55 -- target/dif.sh@73 -- # cat 00:27:40.096 21:20:55 -- nvmf/common.sh@543 -- # cat 00:27:40.096 21:20:55 -- target/dif.sh@72 -- # (( file++ )) 00:27:40.096 21:20:55 -- target/dif.sh@72 -- # (( file <= files )) 00:27:40.096 21:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:40.096 21:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:40.096 { 00:27:40.096 "params": { 00:27:40.096 "name": "Nvme$subsystem", 00:27:40.096 "trtype": "$TEST_TRANSPORT", 00:27:40.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.096 "adrfam": "ipv4", 00:27:40.096 "trsvcid": "$NVMF_PORT", 00:27:40.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.096 "hdgst": ${hdgst:-false}, 00:27:40.096 "ddgst": ${ddgst:-false} 00:27:40.096 }, 00:27:40.096 "method": "bdev_nvme_attach_controller" 00:27:40.096 } 00:27:40.096 EOF 00:27:40.096 )") 00:27:40.096 21:20:55 -- nvmf/common.sh@543 -- # cat 00:27:40.096 21:20:55 -- nvmf/common.sh@545 -- # jq . 
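The create_subsystems trace a little further up (rpc_cmd bdev_null_create / nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener) boils down to the sequence below when issued by hand against an already running nvmf_tgt. This is a sketch: rpc_cmd in the autotest wraps scripts/rpc.py, the checkout path is taken from the trace, and it assumes the TCP transport was created earlier in the test run.

    #!/usr/bin/env bash
    set -euo pipefail

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    for i in 0 1 2; do
      # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2 (as in the trace)
      "$RPC" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
      "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          --serial-number "53313233-$i" --allow-any-host
      "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
      "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
    done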
00:27:40.096 21:20:55 -- nvmf/common.sh@546 -- # IFS=, 00:27:40.096 21:20:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:40.096 "params": { 00:27:40.096 "name": "Nvme0", 00:27:40.096 "trtype": "tcp", 00:27:40.096 "traddr": "10.0.0.2", 00:27:40.096 "adrfam": "ipv4", 00:27:40.096 "trsvcid": "4420", 00:27:40.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.096 "hdgst": false, 00:27:40.096 "ddgst": false 00:27:40.096 }, 00:27:40.096 "method": "bdev_nvme_attach_controller" 00:27:40.096 },{ 00:27:40.096 "params": { 00:27:40.096 "name": "Nvme1", 00:27:40.096 "trtype": "tcp", 00:27:40.096 "traddr": "10.0.0.2", 00:27:40.096 "adrfam": "ipv4", 00:27:40.096 "trsvcid": "4420", 00:27:40.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.096 "hdgst": false, 00:27:40.096 "ddgst": false 00:27:40.096 }, 00:27:40.096 "method": "bdev_nvme_attach_controller" 00:27:40.096 },{ 00:27:40.096 "params": { 00:27:40.096 "name": "Nvme2", 00:27:40.096 "trtype": "tcp", 00:27:40.096 "traddr": "10.0.0.2", 00:27:40.096 "adrfam": "ipv4", 00:27:40.096 "trsvcid": "4420", 00:27:40.096 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:40.096 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:40.096 "hdgst": false, 00:27:40.096 "ddgst": false 00:27:40.096 }, 00:27:40.096 "method": "bdev_nvme_attach_controller" 00:27:40.096 }' 00:27:40.096 21:20:55 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:40.096 21:20:55 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:40.096 21:20:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.096 21:20:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:40.096 21:20:55 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:40.096 21:20:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:40.096 21:20:55 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:40.096 21:20:55 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:40.096 21:20:55 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:40.096 21:20:55 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.096 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:40.096 ... 00:27:40.096 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:40.096 ... 00:27:40.096 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:40.096 ... 
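The LD_PRELOAD line just above is what fio_bdev expands to: fio is launched with SPDK's bdev ioengine preloaded and the generated JSON handed over on an anonymous descriptor. A rough stand-alone equivalent is sketched below; the plugin path, the fio binary location and the --ioengine/--spdk_json_conf options come from the trace, while the job-file contents and the Nvme0n1-style bdev names are assumptions (dif.sh generates its job file on the fly). With numjobs=8 across three files this matches the 24 fio threads reported next.

    #!/usr/bin/env bash
    set -euo pipefail

    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    CONF=bdev.json   # JSON config assembled as in the earlier sketch

    # Job file roughly matching the traced run: 4k random reads, QD16, 8 jobs per file.
    cat > dif.fio <<'EOF'
    [global]
    thread=1
    rw=randread
    bs=4k
    iodepth=16
    numjobs=8

    ; filename is the bdev name exposed by the JSON config (assumed Nvme<n>n1 here)
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    [filename2]
    filename=Nvme2n1
    EOF

    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf="$CONF" dif.fio

(As with the earlier sketch, the heredoc body and delimiter must be unindented in a real script.)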
00:27:40.096 fio-3.35 00:27:40.096 Starting 24 threads 00:27:40.096 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.305 00:27:52.305 filename0: (groupid=0, jobs=1): err= 0: pid=3225491: Thu Apr 18 21:21:06 2024 00:27:52.305 read: IOPS=617, BW=2471KiB/s (2531kB/s)(24.2MiB/10019msec) 00:27:52.305 slat (usec): min=6, max=506, avg=22.02, stdev=15.55 00:27:52.305 clat (usec): min=3885, max=45583, avg=25734.28, stdev=3081.59 00:27:52.305 lat (usec): min=3893, max=45619, avg=25756.30, stdev=3083.28 00:27:52.305 clat percentiles (usec): 00:27:52.305 | 1.00th=[12387], 5.00th=[21365], 10.00th=[24773], 20.00th=[25297], 00:27:52.305 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:27:52.305 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27919], 00:27:52.305 | 99.00th=[33817], 99.50th=[39060], 99.90th=[44303], 99.95th=[45351], 00:27:52.305 | 99.99th=[45351] 00:27:52.305 bw ( KiB/s): min= 2304, max= 2816, per=4.26%, avg=2472.00, stdev=106.20, samples=20 00:27:52.305 iops : min= 576, max= 704, avg=618.00, stdev=26.55, samples=20 00:27:52.305 lat (msec) : 4=0.03%, 10=0.74%, 20=3.04%, 50=96.19% 00:27:52.305 cpu : usr=94.15%, sys=2.78%, ctx=208, majf=0, minf=42 00:27:52.305 IO depths : 1=2.4%, 2=4.8%, 4=12.9%, 8=67.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:52.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 complete : 0=0.0%, 4=91.7%, 8=4.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 issued rwts: total=6190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.305 filename0: (groupid=0, jobs=1): err= 0: pid=3225492: Thu Apr 18 21:21:06 2024 00:27:52.305 read: IOPS=615, BW=2461KiB/s (2520kB/s)(24.1MiB/10011msec) 00:27:52.305 slat (usec): min=5, max=124, avg=45.78, stdev=19.17 00:27:52.305 clat (usec): min=5930, max=39419, avg=25632.18, stdev=2422.06 00:27:52.305 lat (usec): min=5961, max=39427, avg=25677.95, stdev=2424.61 00:27:52.305 clat percentiles (usec): 00:27:52.305 | 1.00th=[12518], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:27:52.305 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:27:52.305 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:27:52.305 | 99.00th=[32375], 99.50th=[35390], 99.90th=[39584], 99.95th=[39584], 00:27:52.305 | 99.99th=[39584] 00:27:52.305 bw ( KiB/s): min= 2304, max= 2672, per=4.23%, avg=2458.68, stdev=75.30, samples=19 00:27:52.305 iops : min= 576, max= 668, avg=614.63, stdev=18.83, samples=19 00:27:52.305 lat (msec) : 10=0.52%, 20=1.15%, 50=98.33% 00:27:52.305 cpu : usr=99.04%, sys=0.56%, ctx=18, majf=0, minf=20 00:27:52.305 IO depths : 1=4.7%, 2=10.3%, 4=22.8%, 8=54.4%, 16=7.8%, 32=0.0%, >=64=0.0% 00:27:52.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.305 filename0: (groupid=0, jobs=1): err= 0: pid=3225493: Thu Apr 18 21:21:06 2024 00:27:52.305 read: IOPS=606, BW=2426KiB/s (2484kB/s)(23.7MiB/10007msec) 00:27:52.305 slat (usec): min=4, max=122, avg=34.85, stdev=21.82 00:27:52.305 clat (usec): min=6800, max=69290, avg=26188.37, stdev=3873.31 00:27:52.305 lat (usec): min=6811, max=69303, avg=26223.22, stdev=3873.45 00:27:52.305 clat percentiles (usec): 00:27:52.305 | 1.00th=[13566], 5.00th=[21365], 
10.00th=[24511], 20.00th=[25297], 00:27:52.305 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:27:52.305 | 70.00th=[26346], 80.00th=[26608], 90.00th=[28443], 95.00th=[32113], 00:27:52.305 | 99.00th=[41681], 99.50th=[43254], 99.90th=[54789], 99.95th=[54789], 00:27:52.305 | 99.99th=[69731] 00:27:52.305 bw ( KiB/s): min= 2272, max= 2480, per=4.16%, avg=2413.47, stdev=61.57, samples=19 00:27:52.305 iops : min= 568, max= 620, avg=603.37, stdev=15.39, samples=19 00:27:52.305 lat (msec) : 10=0.30%, 20=3.35%, 50=96.09%, 100=0.26% 00:27:52.305 cpu : usr=98.79%, sys=0.78%, ctx=18, majf=0, minf=29 00:27:52.305 IO depths : 1=0.8%, 2=1.8%, 4=9.3%, 8=74.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:27:52.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 complete : 0=0.0%, 4=90.8%, 8=5.4%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 issued rwts: total=6068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.305 filename0: (groupid=0, jobs=1): err= 0: pid=3225494: Thu Apr 18 21:21:06 2024 00:27:52.305 read: IOPS=584, BW=2337KiB/s (2394kB/s)(22.8MiB/10004msec) 00:27:52.305 slat (usec): min=6, max=121, avg=30.52, stdev=20.24 00:27:52.305 clat (usec): min=6108, max=70766, avg=27228.99, stdev=5574.40 00:27:52.305 lat (usec): min=6133, max=70783, avg=27259.51, stdev=5573.17 00:27:52.305 clat percentiles (usec): 00:27:52.305 | 1.00th=[11994], 5.00th=[20579], 10.00th=[24773], 20.00th=[25560], 00:27:52.305 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:27:52.305 | 70.00th=[26608], 80.00th=[28443], 90.00th=[33424], 95.00th=[38536], 00:27:52.305 | 99.00th=[45351], 99.50th=[48497], 99.90th=[70779], 99.95th=[70779], 00:27:52.305 | 99.99th=[70779] 00:27:52.305 bw ( KiB/s): min= 2128, max= 2432, per=4.02%, avg=2331.16, stdev=83.93, samples=19 00:27:52.305 iops : min= 532, max= 608, avg=582.79, stdev=20.98, samples=19 00:27:52.305 lat (msec) : 10=0.50%, 20=4.00%, 50=95.12%, 100=0.38% 00:27:52.305 cpu : usr=98.79%, sys=0.78%, ctx=17, majf=0, minf=41 00:27:52.305 IO depths : 1=0.3%, 2=0.7%, 4=6.5%, 8=77.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:27:52.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 complete : 0=0.0%, 4=90.2%, 8=6.6%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 issued rwts: total=5846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.305 filename0: (groupid=0, jobs=1): err= 0: pid=3225495: Thu Apr 18 21:21:06 2024 00:27:52.305 read: IOPS=610, BW=2444KiB/s (2502kB/s)(23.9MiB/10004msec) 00:27:52.305 slat (nsec): min=6170, max=85899, avg=22470.92, stdev=14928.53 00:27:52.305 clat (usec): min=15302, max=59181, avg=25964.57, stdev=1752.69 00:27:52.305 lat (usec): min=15313, max=59205, avg=25987.04, stdev=1752.40 00:27:52.305 clat percentiles (usec): 00:27:52.305 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:27:52.305 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:27:52.305 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27132], 00:27:52.305 | 99.00th=[28443], 99.50th=[32900], 99.90th=[53216], 99.95th=[53216], 00:27:52.305 | 99.99th=[58983] 00:27:52.305 bw ( KiB/s): min= 2304, max= 2560, per=4.20%, avg=2438.47, stdev=67.14, samples=19 00:27:52.305 iops : min= 576, max= 640, avg=609.58, stdev=16.79, samples=19 00:27:52.305 lat (msec) : 20=0.29%, 50=99.44%, 100=0.26% 00:27:52.305 cpu : usr=98.96%, 
sys=0.64%, ctx=7, majf=0, minf=24 00:27:52.305 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:52.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.305 filename0: (groupid=0, jobs=1): err= 0: pid=3225496: Thu Apr 18 21:21:06 2024 00:27:52.305 read: IOPS=592, BW=2371KiB/s (2428kB/s)(23.2MiB/10005msec) 00:27:52.305 slat (usec): min=4, max=117, avg=35.16, stdev=21.96 00:27:52.305 clat (usec): min=6329, max=81920, avg=26815.01, stdev=4369.95 00:27:52.305 lat (usec): min=6335, max=81933, avg=26850.17, stdev=4368.78 00:27:52.305 clat percentiles (usec): 00:27:52.305 | 1.00th=[15533], 5.00th=[23725], 10.00th=[25035], 20.00th=[25560], 00:27:52.305 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:27:52.305 | 70.00th=[26608], 80.00th=[26870], 90.00th=[30278], 95.00th=[33817], 00:27:52.305 | 99.00th=[42206], 99.50th=[43779], 99.90th=[70779], 99.95th=[71828], 00:27:52.305 | 99.99th=[82314] 00:27:52.305 bw ( KiB/s): min= 2104, max= 2464, per=4.07%, avg=2361.68, stdev=87.98, samples=19 00:27:52.305 iops : min= 526, max= 616, avg=590.42, stdev=22.00, samples=19 00:27:52.305 lat (msec) : 10=0.25%, 20=2.19%, 50=97.29%, 100=0.27% 00:27:52.305 cpu : usr=98.72%, sys=0.84%, ctx=17, majf=0, minf=24 00:27:52.305 IO depths : 1=0.1%, 2=0.2%, 4=6.0%, 8=78.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:52.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 complete : 0=0.0%, 4=90.4%, 8=6.0%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.305 issued rwts: total=5931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.305 filename0: (groupid=0, jobs=1): err= 0: pid=3225497: Thu Apr 18 21:21:06 2024 00:27:52.305 read: IOPS=517, BW=2072KiB/s (2122kB/s)(20.2MiB/10004msec) 00:27:52.305 slat (usec): min=5, max=123, avg=31.60, stdev=21.50 00:27:52.305 clat (usec): min=5756, max=70587, avg=30713.12, stdev=5671.45 00:27:52.305 lat (usec): min=5762, max=70603, avg=30744.72, stdev=5668.10 00:27:52.305 clat percentiles (usec): 00:27:52.305 | 1.00th=[21103], 5.00th=[25297], 10.00th=[25560], 20.00th=[26084], 00:27:52.305 | 30.00th=[26346], 40.00th=[26870], 50.00th=[30016], 60.00th=[31327], 00:27:52.305 | 70.00th=[33424], 80.00th=[35914], 90.00th=[38536], 95.00th=[40633], 00:27:52.305 | 99.00th=[43779], 99.50th=[44827], 99.90th=[58983], 99.95th=[70779], 00:27:52.305 | 99.99th=[70779] 00:27:52.305 bw ( KiB/s): min= 1792, max= 2448, per=3.57%, avg=2072.42, stdev=266.89, samples=19 00:27:52.306 iops : min= 448, max= 612, avg=518.11, stdev=66.72, samples=19 00:27:52.306 lat (msec) : 10=0.31%, 20=0.60%, 50=98.78%, 100=0.31% 00:27:52.306 cpu : usr=98.69%, sys=0.90%, ctx=17, majf=0, minf=37 00:27:52.306 IO depths : 1=0.1%, 2=0.1%, 4=12.1%, 8=73.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:27:52.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 complete : 0=0.0%, 4=92.4%, 8=4.1%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 issued rwts: total=5182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.306 filename0: (groupid=0, jobs=1): err= 0: pid=3225498: Thu Apr 18 21:21:06 2024 00:27:52.306 read: IOPS=608, BW=2435KiB/s 
(2493kB/s)(23.8MiB/10011msec) 00:27:52.306 slat (usec): min=6, max=117, avg=29.03, stdev=19.61 00:27:52.306 clat (usec): min=10414, max=45910, avg=26064.23, stdev=4021.76 00:27:52.306 lat (usec): min=10424, max=45917, avg=26093.26, stdev=4021.37 00:27:52.306 clat percentiles (usec): 00:27:52.306 | 1.00th=[12387], 5.00th=[20579], 10.00th=[24511], 20.00th=[25297], 00:27:52.306 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:27:52.306 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[32900], 00:27:52.306 | 99.00th=[40109], 99.50th=[40633], 99.90th=[43254], 99.95th=[45351], 00:27:52.306 | 99.99th=[45876] 00:27:52.306 bw ( KiB/s): min= 2352, max= 2512, per=4.19%, avg=2434.84, stdev=45.97, samples=19 00:27:52.306 iops : min= 588, max= 628, avg=608.63, stdev=11.57, samples=19 00:27:52.306 lat (msec) : 20=4.64%, 50=95.36% 00:27:52.306 cpu : usr=99.00%, sys=0.59%, ctx=19, majf=0, minf=41 00:27:52.306 IO depths : 1=2.1%, 2=4.8%, 4=16.0%, 8=65.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:52.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 complete : 0=0.0%, 4=92.6%, 8=2.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 issued rwts: total=6094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.306 filename1: (groupid=0, jobs=1): err= 0: pid=3225499: Thu Apr 18 21:21:06 2024 00:27:52.306 read: IOPS=606, BW=2428KiB/s (2486kB/s)(23.8MiB/10019msec) 00:27:52.306 slat (usec): min=6, max=111, avg=26.45, stdev=19.56 00:27:52.306 clat (usec): min=9027, max=50082, avg=26211.99, stdev=3812.90 00:27:52.306 lat (usec): min=9077, max=50102, avg=26238.45, stdev=3812.68 00:27:52.306 clat percentiles (usec): 00:27:52.306 | 1.00th=[13042], 5.00th=[21627], 10.00th=[24773], 20.00th=[25297], 00:27:52.306 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:27:52.306 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27657], 95.00th=[32637], 00:27:52.306 | 99.00th=[40633], 99.50th=[42206], 99.90th=[46400], 99.95th=[50070], 00:27:52.306 | 99.99th=[50070] 00:27:52.306 bw ( KiB/s): min= 2224, max= 2618, per=4.18%, avg=2425.11, stdev=103.56, samples=19 00:27:52.306 iops : min= 556, max= 654, avg=606.21, stdev=25.87, samples=19 00:27:52.306 lat (msec) : 10=0.07%, 20=4.28%, 50=95.59%, 100=0.07% 00:27:52.306 cpu : usr=99.06%, sys=0.52%, ctx=13, majf=0, minf=34 00:27:52.306 IO depths : 1=0.5%, 2=2.2%, 4=9.1%, 8=72.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:52.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 complete : 0=0.0%, 4=91.2%, 8=6.2%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 issued rwts: total=6081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.306 filename1: (groupid=0, jobs=1): err= 0: pid=3225500: Thu Apr 18 21:21:06 2024 00:27:52.306 read: IOPS=611, BW=2445KiB/s (2504kB/s)(23.9MiB/10006msec) 00:27:52.306 slat (usec): min=4, max=112, avg=31.59, stdev=20.00 00:27:52.306 clat (usec): min=5592, max=54312, avg=25863.74, stdev=2755.04 00:27:52.306 lat (usec): min=5599, max=54325, avg=25895.33, stdev=2755.34 00:27:52.306 clat percentiles (usec): 00:27:52.306 | 1.00th=[14484], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:27:52.306 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:27:52.306 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:27:52.306 | 99.00th=[34866], 99.50th=[39584], 99.90th=[54264], 
99.95th=[54264], 00:27:52.306 | 99.99th=[54264] 00:27:52.306 bw ( KiB/s): min= 2224, max= 2560, per=4.19%, avg=2433.68, stdev=76.53, samples=19 00:27:52.306 iops : min= 556, max= 640, avg=608.42, stdev=19.13, samples=19 00:27:52.306 lat (msec) : 10=0.49%, 20=1.03%, 50=98.22%, 100=0.26% 00:27:52.306 cpu : usr=99.08%, sys=0.51%, ctx=12, majf=0, minf=30 00:27:52.306 IO depths : 1=5.3%, 2=11.4%, 4=24.4%, 8=51.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:27:52.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 issued rwts: total=6116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.306 filename1: (groupid=0, jobs=1): err= 0: pid=3225501: Thu Apr 18 21:21:06 2024 00:27:52.306 read: IOPS=612, BW=2449KiB/s (2507kB/s)(23.9MiB/10011msec) 00:27:52.306 slat (usec): min=6, max=122, avg=50.53, stdev=15.12 00:27:52.306 clat (usec): min=14538, max=38460, avg=25685.55, stdev=1210.87 00:27:52.306 lat (usec): min=14547, max=38474, avg=25736.08, stdev=1211.32 00:27:52.306 clat percentiles (usec): 00:27:52.306 | 1.00th=[23725], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:27:52.306 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:27:52.306 | 70.00th=[26084], 80.00th=[26084], 90.00th=[26608], 95.00th=[26870], 00:27:52.306 | 99.00th=[29230], 99.50th=[32113], 99.90th=[38536], 99.95th=[38536], 00:27:52.306 | 99.99th=[38536] 00:27:52.306 bw ( KiB/s): min= 2304, max= 2560, per=4.21%, avg=2444.95, stdev=58.88, samples=19 00:27:52.306 iops : min= 576, max= 640, avg=611.16, stdev=14.75, samples=19 00:27:52.306 lat (msec) : 20=0.26%, 50=99.74% 00:27:52.306 cpu : usr=99.10%, sys=0.50%, ctx=12, majf=0, minf=29 00:27:52.306 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:52.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.306 filename1: (groupid=0, jobs=1): err= 0: pid=3225502: Thu Apr 18 21:21:06 2024 00:27:52.306 read: IOPS=610, BW=2444KiB/s (2502kB/s)(23.9MiB/10005msec) 00:27:52.306 slat (usec): min=6, max=101, avg=29.62, stdev=18.95 00:27:52.306 clat (usec): min=9701, max=52937, avg=25920.64, stdev=1932.67 00:27:52.306 lat (usec): min=9733, max=52953, avg=25950.25, stdev=1931.82 00:27:52.306 clat percentiles (usec): 00:27:52.306 | 1.00th=[23200], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:27:52.306 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:27:52.306 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:27:52.306 | 99.00th=[31589], 99.50th=[33817], 99.90th=[52691], 99.95th=[52691], 00:27:52.306 | 99.99th=[52691] 00:27:52.306 bw ( KiB/s): min= 2304, max= 2560, per=4.20%, avg=2438.47, stdev=67.14, samples=19 00:27:52.306 iops : min= 576, max= 640, avg=609.58, stdev=16.79, samples=19 00:27:52.306 lat (msec) : 10=0.10%, 20=0.25%, 50=99.39%, 100=0.26% 00:27:52.306 cpu : usr=99.07%, sys=0.52%, ctx=10, majf=0, minf=21 00:27:52.306 IO depths : 1=5.4%, 2=11.0%, 4=23.2%, 8=53.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:27:52.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:52.306 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.306 filename1: (groupid=0, jobs=1): err= 0: pid=3225503: Thu Apr 18 21:21:06 2024 00:27:52.306 read: IOPS=597, BW=2391KiB/s (2448kB/s)(23.4MiB/10011msec) 00:27:52.306 slat (nsec): min=6258, max=99485, avg=32131.72, stdev=19314.93 00:27:52.306 clat (usec): min=6613, max=46367, avg=26538.27, stdev=3758.49 00:27:52.306 lat (usec): min=6623, max=46375, avg=26570.40, stdev=3757.30 00:27:52.306 clat percentiles (usec): 00:27:52.306 | 1.00th=[15795], 5.00th=[22676], 10.00th=[24773], 20.00th=[25297], 00:27:52.306 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:27:52.306 | 70.00th=[26346], 80.00th=[26608], 90.00th=[30016], 95.00th=[33424], 00:27:52.306 | 99.00th=[40633], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:27:52.306 | 99.99th=[46400] 00:27:52.306 bw ( KiB/s): min= 1920, max= 2560, per=4.11%, avg=2384.32, stdev=148.55, samples=19 00:27:52.306 iops : min= 480, max= 640, avg=596.00, stdev=37.10, samples=19 00:27:52.306 lat (msec) : 10=0.13%, 20=2.11%, 50=97.76% 00:27:52.306 cpu : usr=98.12%, sys=1.12%, ctx=225, majf=0, minf=26 00:27:52.306 IO depths : 1=3.1%, 2=6.3%, 4=15.4%, 8=64.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:27:52.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 complete : 0=0.0%, 4=92.1%, 8=3.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.306 filename1: (groupid=0, jobs=1): err= 0: pid=3225504: Thu Apr 18 21:21:06 2024 00:27:52.306 read: IOPS=625, BW=2501KiB/s (2561kB/s)(24.4MiB/10005msec) 00:27:52.306 slat (usec): min=4, max=112, avg=34.71, stdev=22.27 00:27:52.306 clat (usec): min=5817, max=49703, avg=25320.52, stdev=3773.83 00:27:52.306 lat (usec): min=5825, max=49726, avg=25355.23, stdev=3777.49 00:27:52.306 clat percentiles (usec): 00:27:52.306 | 1.00th=[12911], 5.00th=[17433], 10.00th=[21627], 20.00th=[25035], 00:27:52.306 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:27:52.306 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[29754], 00:27:52.306 | 99.00th=[39060], 99.50th=[42730], 99.90th=[46924], 99.95th=[49546], 00:27:52.306 | 99.99th=[49546] 00:27:52.306 bw ( KiB/s): min= 2288, max= 3168, per=4.29%, avg=2492.84, stdev=182.87, samples=19 00:27:52.306 iops : min= 572, max= 792, avg=623.21, stdev=45.72, samples=19 00:27:52.306 lat (msec) : 10=0.32%, 20=7.00%, 50=92.68% 00:27:52.306 cpu : usr=99.08%, sys=0.52%, ctx=16, majf=0, minf=22 00:27:52.306 IO depths : 1=2.5%, 2=6.0%, 4=15.2%, 8=64.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:52.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 complete : 0=0.0%, 4=92.0%, 8=3.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.306 issued rwts: total=6256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.307 filename1: (groupid=0, jobs=1): err= 0: pid=3225505: Thu Apr 18 21:21:06 2024 00:27:52.307 read: IOPS=619, BW=2478KiB/s (2538kB/s)(24.2MiB/10014msec) 00:27:52.307 slat (usec): min=6, max=289, avg=31.46, stdev=18.60 00:27:52.307 clat (usec): min=4783, max=45796, avg=25583.54, stdev=2606.51 00:27:52.307 lat (usec): min=4792, max=45806, avg=25615.00, stdev=2607.87 00:27:52.307 clat percentiles (usec): 00:27:52.307 | 
1.00th=[13566], 5.00th=[23987], 10.00th=[24773], 20.00th=[25297], 00:27:52.307 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:27:52.307 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:27:52.307 | 99.00th=[31589], 99.50th=[32637], 99.90th=[43254], 99.95th=[45876], 00:27:52.307 | 99.99th=[45876] 00:27:52.307 bw ( KiB/s): min= 2304, max= 2784, per=4.26%, avg=2475.20, stdev=107.03, samples=20 00:27:52.307 iops : min= 576, max= 696, avg=618.80, stdev=26.76, samples=20 00:27:52.307 lat (msec) : 10=0.61%, 20=2.35%, 50=97.03% 00:27:52.307 cpu : usr=94.39%, sys=2.57%, ctx=270, majf=0, minf=40 00:27:52.307 IO depths : 1=5.5%, 2=11.4%, 4=23.7%, 8=52.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:27:52.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 issued rwts: total=6204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.307 filename1: (groupid=0, jobs=1): err= 0: pid=3225506: Thu Apr 18 21:21:06 2024 00:27:52.307 read: IOPS=610, BW=2441KiB/s (2500kB/s)(23.9MiB/10005msec) 00:27:52.307 slat (usec): min=6, max=105, avg=37.43, stdev=19.71 00:27:52.307 clat (usec): min=4662, max=46720, avg=25951.97, stdev=3628.69 00:27:52.307 lat (usec): min=4672, max=46734, avg=25989.40, stdev=3629.43 00:27:52.307 clat percentiles (usec): 00:27:52.307 | 1.00th=[12911], 5.00th=[21103], 10.00th=[24511], 20.00th=[25035], 00:27:52.307 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:27:52.307 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27919], 95.00th=[31327], 00:27:52.307 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43779], 99.95th=[46924], 00:27:52.307 | 99.99th=[46924] 00:27:52.307 bw ( KiB/s): min= 2304, max= 2650, per=4.21%, avg=2442.63, stdev=65.13, samples=19 00:27:52.307 iops : min= 576, max= 662, avg=610.63, stdev=16.19, samples=19 00:27:52.307 lat (msec) : 10=0.52%, 20=3.32%, 50=96.15% 00:27:52.307 cpu : usr=95.30%, sys=2.33%, ctx=100, majf=0, minf=32 00:27:52.307 IO depths : 1=2.5%, 2=5.3%, 4=15.9%, 8=65.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:27:52.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 complete : 0=0.0%, 4=92.2%, 8=2.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 issued rwts: total=6106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.307 filename2: (groupid=0, jobs=1): err= 0: pid=3225507: Thu Apr 18 21:21:06 2024 00:27:52.307 read: IOPS=611, BW=2447KiB/s (2506kB/s)(23.9MiB/10011msec) 00:27:52.307 slat (nsec): min=6856, max=89126, avg=44575.81, stdev=14036.12 00:27:52.307 clat (usec): min=14606, max=40987, avg=25781.48, stdev=1572.63 00:27:52.307 lat (usec): min=14664, max=41039, avg=25826.06, stdev=1573.46 00:27:52.307 clat percentiles (usec): 00:27:52.307 | 1.00th=[20579], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:27:52.307 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:27:52.307 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:27:52.307 | 99.00th=[30802], 99.50th=[33424], 99.90th=[38536], 99.95th=[40633], 00:27:52.307 | 99.99th=[41157] 00:27:52.307 bw ( KiB/s): min= 2304, max= 2560, per=4.21%, avg=2443.26, stdev=59.72, samples=19 00:27:52.307 iops : min= 576, max= 640, avg=610.74, stdev=14.95, samples=19 00:27:52.307 lat (msec) : 20=0.83%, 50=99.17% 00:27:52.307 cpu : 
usr=98.30%, sys=1.11%, ctx=94, majf=0, minf=22 00:27:52.307 IO depths : 1=5.2%, 2=10.4%, 4=23.3%, 8=53.7%, 16=7.3%, 32=0.0%, >=64=0.0% 00:27:52.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 issued rwts: total=6124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.307 filename2: (groupid=0, jobs=1): err= 0: pid=3225508: Thu Apr 18 21:21:06 2024 00:27:52.307 read: IOPS=612, BW=2449KiB/s (2508kB/s)(23.9MiB/10011msec) 00:27:52.307 slat (nsec): min=6288, max=84149, avg=36257.59, stdev=17250.00 00:27:52.307 clat (usec): min=7054, max=42842, avg=25851.18, stdev=2786.33 00:27:52.307 lat (usec): min=7065, max=42869, avg=25887.43, stdev=2787.53 00:27:52.307 clat percentiles (usec): 00:27:52.307 | 1.00th=[14746], 5.00th=[23725], 10.00th=[24773], 20.00th=[25297], 00:27:52.307 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:27:52.307 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[28443], 00:27:52.307 | 99.00th=[39060], 99.50th=[39584], 99.90th=[41681], 99.95th=[42730], 00:27:52.307 | 99.99th=[42730] 00:27:52.307 bw ( KiB/s): min= 2304, max= 2608, per=4.22%, avg=2448.26, stdev=73.57, samples=19 00:27:52.307 iops : min= 576, max= 652, avg=612.00, stdev=18.37, samples=19 00:27:52.307 lat (msec) : 10=0.16%, 20=2.38%, 50=97.46% 00:27:52.307 cpu : usr=94.19%, sys=2.82%, ctx=171, majf=0, minf=38 00:27:52.307 IO depths : 1=3.8%, 2=7.6%, 4=17.4%, 8=61.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:27:52.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 complete : 0=0.0%, 4=92.6%, 8=2.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 issued rwts: total=6130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.307 filename2: (groupid=0, jobs=1): err= 0: pid=3225509: Thu Apr 18 21:21:06 2024 00:27:52.307 read: IOPS=611, BW=2445KiB/s (2504kB/s)(23.9MiB/10014msec) 00:27:52.307 slat (nsec): min=6360, max=89948, avg=43013.98, stdev=14748.07 00:27:52.307 clat (usec): min=14244, max=43912, avg=25833.31, stdev=1713.67 00:27:52.307 lat (usec): min=14253, max=43925, avg=25876.33, stdev=1713.74 00:27:52.307 clat percentiles (usec): 00:27:52.307 | 1.00th=[20841], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:27:52.307 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:27:52.307 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:27:52.307 | 99.00th=[32113], 99.50th=[38011], 99.90th=[39060], 99.95th=[43779], 00:27:52.307 | 99.99th=[43779] 00:27:52.307 bw ( KiB/s): min= 2304, max= 2560, per=4.21%, avg=2442.42, stdev=60.47, samples=19 00:27:52.307 iops : min= 576, max= 640, avg=610.53, stdev=15.14, samples=19 00:27:52.307 lat (msec) : 20=0.93%, 50=99.07% 00:27:52.307 cpu : usr=98.54%, sys=0.89%, ctx=94, majf=0, minf=26 00:27:52.307 IO depths : 1=5.0%, 2=10.0%, 4=21.8%, 8=55.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:27:52.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 issued rwts: total=6122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.307 filename2: (groupid=0, jobs=1): err= 0: pid=3225510: Thu Apr 18 21:21:06 2024 00:27:52.307 read: IOPS=598, BW=2392KiB/s 
(2450kB/s)(23.4MiB/10009msec) 00:27:52.307 slat (nsec): min=6035, max=87426, avg=28371.23, stdev=16970.28 00:27:52.307 clat (usec): min=9866, max=56794, avg=26603.30, stdev=4296.59 00:27:52.307 lat (usec): min=9881, max=56810, avg=26631.67, stdev=4296.15 00:27:52.307 clat percentiles (usec): 00:27:52.307 | 1.00th=[13566], 5.00th=[21627], 10.00th=[24773], 20.00th=[25297], 00:27:52.307 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:27:52.307 | 70.00th=[26346], 80.00th=[26870], 90.00th=[30278], 95.00th=[35390], 00:27:52.307 | 99.00th=[42206], 99.50th=[44303], 99.90th=[56886], 99.95th=[56886], 00:27:52.307 | 99.99th=[56886] 00:27:52.307 bw ( KiB/s): min= 2176, max= 2480, per=4.11%, avg=2383.16, stdev=72.44, samples=19 00:27:52.307 iops : min= 544, max= 620, avg=595.79, stdev=18.11, samples=19 00:27:52.307 lat (msec) : 10=0.10%, 20=3.63%, 50=96.01%, 100=0.27% 00:27:52.307 cpu : usr=93.90%, sys=3.03%, ctx=185, majf=0, minf=29 00:27:52.307 IO depths : 1=0.8%, 2=1.6%, 4=6.9%, 8=75.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:52.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 complete : 0=0.0%, 4=90.4%, 8=7.2%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 issued rwts: total=5986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.307 filename2: (groupid=0, jobs=1): err= 0: pid=3225511: Thu Apr 18 21:21:06 2024 00:27:52.307 read: IOPS=611, BW=2445KiB/s (2504kB/s)(23.9MiB/10014msec) 00:27:52.307 slat (nsec): min=6475, max=88916, avg=41995.04, stdev=15226.39 00:27:52.307 clat (usec): min=14663, max=38663, avg=25850.74, stdev=1405.57 00:27:52.307 lat (usec): min=14704, max=38676, avg=25892.73, stdev=1405.14 00:27:52.307 clat percentiles (usec): 00:27:52.307 | 1.00th=[21890], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:27:52.307 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:27:52.307 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:27:52.307 | 99.00th=[30540], 99.50th=[32900], 99.90th=[38011], 99.95th=[38536], 00:27:52.307 | 99.99th=[38536] 00:27:52.307 bw ( KiB/s): min= 2304, max= 2560, per=4.21%, avg=2442.42, stdev=60.47, samples=19 00:27:52.307 iops : min= 576, max= 640, avg=610.53, stdev=15.14, samples=19 00:27:52.307 lat (msec) : 20=0.29%, 50=99.71% 00:27:52.307 cpu : usr=98.63%, sys=0.87%, ctx=123, majf=0, minf=26 00:27:52.307 IO depths : 1=5.2%, 2=10.4%, 4=22.1%, 8=54.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:27:52.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.307 issued rwts: total=6122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.307 filename2: (groupid=0, jobs=1): err= 0: pid=3225512: Thu Apr 18 21:21:06 2024 00:27:52.307 read: IOPS=604, BW=2417KiB/s (2475kB/s)(23.6MiB/10003msec) 00:27:52.307 slat (nsec): min=6158, max=90285, avg=27451.05, stdev=17420.62 00:27:52.307 clat (usec): min=8653, max=69561, avg=26340.20, stdev=4268.95 00:27:52.307 lat (usec): min=8676, max=69585, avg=26367.65, stdev=4269.16 00:27:52.307 clat percentiles (usec): 00:27:52.307 | 1.00th=[12649], 5.00th=[21365], 10.00th=[24773], 20.00th=[25297], 00:27:52.308 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:27:52.308 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27919], 95.00th=[32375], 00:27:52.308 | 99.00th=[42206], 
99.50th=[45351], 99.90th=[63177], 99.95th=[69731], 00:27:52.308 | 99.99th=[69731] 00:27:52.308 bw ( KiB/s): min= 2304, max= 2480, per=4.16%, avg=2412.63, stdev=48.76, samples=19 00:27:52.308 iops : min= 576, max= 620, avg=603.16, stdev=12.19, samples=19 00:27:52.308 lat (msec) : 10=0.20%, 20=3.69%, 50=95.85%, 100=0.26% 00:27:52.308 cpu : usr=98.67%, sys=0.91%, ctx=18, majf=0, minf=57 00:27:52.308 IO depths : 1=0.7%, 2=1.3%, 4=6.1%, 8=76.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:52.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.308 complete : 0=0.0%, 4=90.3%, 8=7.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.308 issued rwts: total=6044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.308 filename2: (groupid=0, jobs=1): err= 0: pid=3225513: Thu Apr 18 21:21:06 2024 00:27:52.308 read: IOPS=610, BW=2443KiB/s (2502kB/s)(23.9MiB/10007msec) 00:27:52.308 slat (nsec): min=6003, max=86126, avg=42300.54, stdev=14404.54 00:27:52.308 clat (usec): min=11393, max=55764, avg=25805.39, stdev=1907.92 00:27:52.308 lat (usec): min=11415, max=55783, avg=25847.69, stdev=1907.16 00:27:52.308 clat percentiles (usec): 00:27:52.308 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:27:52.308 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:27:52.308 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:27:52.308 | 99.00th=[29492], 99.50th=[32113], 99.90th=[55837], 99.95th=[55837], 00:27:52.308 | 99.99th=[55837] 00:27:52.308 bw ( KiB/s): min= 2308, max= 2560, per=4.20%, avg=2438.95, stdev=51.22, samples=19 00:27:52.308 iops : min= 577, max= 640, avg=609.74, stdev=12.81, samples=19 00:27:52.308 lat (msec) : 20=0.26%, 50=99.48%, 100=0.26% 00:27:52.308 cpu : usr=94.45%, sys=2.67%, ctx=71, majf=0, minf=33 00:27:52.308 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:52.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.308 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.308 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.308 filename2: (groupid=0, jobs=1): err= 0: pid=3225514: Thu Apr 18 21:21:06 2024 00:27:52.308 read: IOPS=616, BW=2467KiB/s (2527kB/s)(24.1MiB/10012msec) 00:27:52.308 slat (nsec): min=6672, max=91688, avg=26249.03, stdev=16667.94 00:27:52.308 clat (usec): min=5247, max=33851, avg=25743.37, stdev=2230.60 00:27:52.308 lat (usec): min=5260, max=33910, avg=25769.62, stdev=2231.03 00:27:52.308 clat percentiles (usec): 00:27:52.308 | 1.00th=[ 7767], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:27:52.308 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:27:52.308 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:27:52.308 | 99.00th=[29230], 99.50th=[32375], 99.90th=[33424], 99.95th=[33817], 00:27:52.308 | 99.99th=[33817] 00:27:52.308 bw ( KiB/s): min= 2304, max= 2816, per=4.25%, avg=2465.42, stdev=103.22, samples=19 00:27:52.308 iops : min= 576, max= 704, avg=616.32, stdev=25.82, samples=19 00:27:52.308 lat (msec) : 10=1.04%, 20=0.26%, 50=98.70% 00:27:52.308 cpu : usr=98.80%, sys=0.73%, ctx=91, majf=0, minf=26 00:27:52.308 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:52.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.308 complete 
: 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.308 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.308 00:27:52.308 Run status group 0 (all jobs): 00:27:52.308 READ: bw=56.7MiB/s (59.4MB/s), 2072KiB/s-2501KiB/s (2122kB/s-2561kB/s), io=568MiB (596MB), run=10003-10019msec 00:27:52.308 21:21:06 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:52.308 21:21:06 -- target/dif.sh@43 -- # local sub 00:27:52.308 21:21:06 -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.308 21:21:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:52.308 21:21:06 -- target/dif.sh@36 -- # local sub_id=0 00:27:52.308 21:21:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.308 21:21:06 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:52.308 21:21:06 -- target/dif.sh@36 -- # local sub_id=1 00:27:52.308 21:21:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.308 21:21:06 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:52.308 21:21:06 -- target/dif.sh@36 -- # local sub_id=2 00:27:52.308 21:21:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:52.308 21:21:06 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:52.308 21:21:06 -- target/dif.sh@115 -- # numjobs=2 00:27:52.308 21:21:06 -- target/dif.sh@115 -- # iodepth=8 00:27:52.308 21:21:06 -- target/dif.sh@115 -- # runtime=5 00:27:52.308 21:21:06 -- target/dif.sh@115 -- # files=1 00:27:52.308 21:21:06 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:52.308 21:21:06 -- target/dif.sh@28 -- # local sub 00:27:52.308 21:21:06 -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.308 21:21:06 -- target/dif.sh@31 -- # create_subsystem 0 
00:27:52.308 21:21:06 -- target/dif.sh@18 -- # local sub_id=0 00:27:52.308 21:21:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 bdev_null0 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 [2024-04-18 21:21:06.640746] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.308 21:21:06 -- target/dif.sh@31 -- # create_subsystem 1 00:27:52.308 21:21:06 -- target/dif.sh@18 -- # local sub_id=1 00:27:52.308 21:21:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 bdev_null1 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.308 21:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.308 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:27:52.308 21:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.308 21:21:06 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:52.308 21:21:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.308 21:21:06 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:52.309 21:21:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:52.309 21:21:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:52.309 21:21:06 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:52.309 21:21:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:52.309 21:21:06 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.309 21:21:06 -- common/autotest_common.sh@1327 -- # shift 00:27:52.309 21:21:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:52.309 21:21:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.309 21:21:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:52.309 21:21:06 -- target/dif.sh@82 -- # gen_fio_conf 00:27:52.309 21:21:06 -- nvmf/common.sh@521 -- # config=() 00:27:52.309 21:21:06 -- target/dif.sh@54 -- # local file 00:27:52.309 21:21:06 -- nvmf/common.sh@521 -- # local subsystem config 00:27:52.309 21:21:06 -- target/dif.sh@56 -- # cat 00:27:52.309 21:21:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:52.309 21:21:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:52.309 { 00:27:52.309 "params": { 00:27:52.309 "name": "Nvme$subsystem", 00:27:52.309 "trtype": "$TEST_TRANSPORT", 00:27:52.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.309 "adrfam": "ipv4", 00:27:52.309 "trsvcid": "$NVMF_PORT", 00:27:52.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.309 "hdgst": ${hdgst:-false}, 00:27:52.309 "ddgst": ${ddgst:-false} 00:27:52.309 }, 00:27:52.309 "method": "bdev_nvme_attach_controller" 00:27:52.309 } 00:27:52.309 EOF 00:27:52.309 )") 00:27:52.309 21:21:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.309 21:21:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:52.309 21:21:06 -- nvmf/common.sh@543 -- # cat 00:27:52.309 21:21:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:52.309 21:21:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:52.309 21:21:06 -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.309 21:21:06 -- target/dif.sh@73 -- # cat 00:27:52.309 21:21:06 -- target/dif.sh@72 -- # (( file++ )) 00:27:52.309 21:21:06 -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.309 21:21:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:52.309 21:21:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:52.309 { 00:27:52.309 "params": { 00:27:52.309 "name": "Nvme$subsystem", 00:27:52.309 "trtype": "$TEST_TRANSPORT", 00:27:52.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.309 "adrfam": "ipv4", 00:27:52.309 "trsvcid": "$NVMF_PORT", 00:27:52.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.309 "hdgst": ${hdgst:-false}, 00:27:52.309 "ddgst": ${ddgst:-false} 00:27:52.309 }, 00:27:52.309 "method": "bdev_nvme_attach_controller" 00:27:52.309 } 00:27:52.309 EOF 00:27:52.309 )") 00:27:52.309 21:21:06 -- nvmf/common.sh@543 -- # cat 00:27:52.309 21:21:06 -- nvmf/common.sh@545 -- # jq . 
00:27:52.309 21:21:06 -- nvmf/common.sh@546 -- # IFS=, 00:27:52.309 21:21:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:52.309 "params": { 00:27:52.309 "name": "Nvme0", 00:27:52.309 "trtype": "tcp", 00:27:52.309 "traddr": "10.0.0.2", 00:27:52.309 "adrfam": "ipv4", 00:27:52.309 "trsvcid": "4420", 00:27:52.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.309 "hdgst": false, 00:27:52.309 "ddgst": false 00:27:52.309 }, 00:27:52.309 "method": "bdev_nvme_attach_controller" 00:27:52.309 },{ 00:27:52.309 "params": { 00:27:52.309 "name": "Nvme1", 00:27:52.309 "trtype": "tcp", 00:27:52.309 "traddr": "10.0.0.2", 00:27:52.309 "adrfam": "ipv4", 00:27:52.309 "trsvcid": "4420", 00:27:52.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.309 "hdgst": false, 00:27:52.309 "ddgst": false 00:27:52.309 }, 00:27:52.309 "method": "bdev_nvme_attach_controller" 00:27:52.309 }' 00:27:52.309 21:21:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:52.309 21:21:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:52.309 21:21:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.309 21:21:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.309 21:21:06 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:52.309 21:21:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:52.309 21:21:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:52.309 21:21:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:52.309 21:21:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:52.309 21:21:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.309 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:52.309 ... 00:27:52.309 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:52.309 ... 
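For reference, the run above drives a stock fio binary through the SPDK bdev fio plugin: the JSON printed just above is passed via --spdk_json_conf, the generated job file defines one [filenameN] section per subsystem, and the plugin is LD_PRELOADed into /usr/src/fio/fio. Below is a minimal standalone sketch of an equivalent invocation. It is illustrative only and not part of the captured output; the bdev names Nvme0n1/Nvme1n1 are the conventional names for controllers attached as Nvme0/Nvme1 rather than values shown in this log, and the sketch assumes the SPDK tree sits at ./spdk and that the target from this run is still listening on 10.0.0.2:4420.

# Sketch: rerun the fio_bdev workload by hand (see assumptions above).
# bdev.json is assumed to hold the generated bdev config, i.e. the two
# bdev_nvme_attach_controller entries printed above wrapped in a
# {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope.
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
; assumed bdev name (controller "Nvme0", namespace 1)
filename=Nvme0n1

[filename1]
; assumed bdev name (controller "Nvme1", namespace 1)
filename=Nvme1n1
EOF
LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio dif.fio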
00:27:52.309 fio-3.35 00:27:52.309 Starting 4 threads 00:27:52.309 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.584 00:27:57.584 filename0: (groupid=0, jobs=1): err= 0: pid=3227586: Thu Apr 18 21:21:12 2024 00:27:57.584 read: IOPS=2557, BW=20.0MiB/s (20.9MB/s)(99.9MiB/5003msec) 00:27:57.584 slat (nsec): min=5957, max=81364, avg=10774.60, stdev=6454.64 00:27:57.585 clat (usec): min=1651, max=44667, avg=3100.99, stdev=1116.63 00:27:57.585 lat (usec): min=1657, max=44692, avg=3111.76, stdev=1116.66 00:27:57.585 clat percentiles (usec): 00:27:57.585 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2737], 00:27:57.585 | 30.00th=[ 2900], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:27:57.585 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3556], 95.00th=[ 3752], 00:27:57.585 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 5014], 99.95th=[44827], 00:27:57.585 | 99.99th=[44827] 00:27:57.585 bw ( KiB/s): min=19056, max=20880, per=25.06%, avg=20428.44, stdev=538.90, samples=9 00:27:57.585 iops : min= 2382, max= 2610, avg=2553.56, stdev=67.36, samples=9 00:27:57.585 lat (msec) : 2=0.16%, 4=97.41%, 10=2.36%, 50=0.06% 00:27:57.585 cpu : usr=96.82%, sys=2.82%, ctx=7, majf=0, minf=21 00:27:57.585 IO depths : 1=0.1%, 2=1.1%, 4=65.8%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.585 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.585 issued rwts: total=12793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.585 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.585 filename0: (groupid=0, jobs=1): err= 0: pid=3227587: Thu Apr 18 21:21:12 2024 00:27:57.585 read: IOPS=2558, BW=20.0MiB/s (21.0MB/s)(99.9MiB/5001msec) 00:27:57.585 slat (nsec): min=6074, max=60666, avg=12352.34, stdev=6112.83 00:27:57.585 clat (usec): min=1794, max=5094, avg=3096.43, stdev=420.05 00:27:57.585 lat (usec): min=1800, max=5112, avg=3108.78, stdev=419.97 00:27:57.585 clat percentiles (usec): 00:27:57.585 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2769], 00:27:57.585 | 30.00th=[ 2933], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:27:57.585 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3621], 95.00th=[ 3851], 00:27:57.585 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 4752], 99.95th=[ 4883], 00:27:57.585 | 99.99th=[ 5080] 00:27:57.585 bw ( KiB/s): min=20240, max=20976, per=25.14%, avg=20490.67, stdev=209.84, samples=9 00:27:57.585 iops : min= 2530, max= 2622, avg=2561.33, stdev=26.23, samples=9 00:27:57.585 lat (msec) : 2=0.20%, 4=96.85%, 10=2.95% 00:27:57.585 cpu : usr=97.16%, sys=2.46%, ctx=7, majf=0, minf=56 00:27:57.585 IO depths : 1=0.1%, 2=1.3%, 4=65.7%, 8=32.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.585 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.585 issued rwts: total=12793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.585 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.585 filename1: (groupid=0, jobs=1): err= 0: pid=3227588: Thu Apr 18 21:21:12 2024 00:27:57.585 read: IOPS=2534, BW=19.8MiB/s (20.8MB/s)(99.0MiB/5001msec) 00:27:57.585 slat (nsec): min=5961, max=51064, avg=11880.42, stdev=7881.39 00:27:57.585 clat (usec): min=1300, max=5458, avg=3126.43, stdev=432.05 00:27:57.585 lat (usec): min=1307, max=5492, avg=3138.32, stdev=431.87 00:27:57.585 clat percentiles (usec): 00:27:57.585 | 1.00th=[ 2212], 
5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2802], 00:27:57.585 | 30.00th=[ 2966], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:27:57.585 | 70.00th=[ 3294], 80.00th=[ 3425], 90.00th=[ 3687], 95.00th=[ 3884], 00:27:57.585 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 5211], 99.95th=[ 5342], 00:27:57.585 | 99.99th=[ 5407] 00:27:57.585 bw ( KiB/s): min=19936, max=20585, per=24.88%, avg=20281.89, stdev=223.52, samples=9 00:27:57.585 iops : min= 2492, max= 2573, avg=2535.22, stdev=27.92, samples=9 00:27:57.585 lat (msec) : 2=0.18%, 4=96.41%, 10=3.41% 00:27:57.585 cpu : usr=97.16%, sys=2.48%, ctx=7, majf=0, minf=42 00:27:57.585 IO depths : 1=0.1%, 2=1.1%, 4=65.7%, 8=33.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.585 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.585 issued rwts: total=12676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.585 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.585 filename1: (groupid=0, jobs=1): err= 0: pid=3227589: Thu Apr 18 21:21:12 2024 00:27:57.585 read: IOPS=2542, BW=19.9MiB/s (20.8MB/s)(99.3MiB/5002msec) 00:27:57.585 slat (nsec): min=5967, max=50819, avg=11413.60, stdev=7060.47 00:27:57.585 clat (usec): min=1688, max=5236, avg=3118.56, stdev=409.76 00:27:57.585 lat (usec): min=1710, max=5255, avg=3129.97, stdev=409.64 00:27:57.585 clat percentiles (usec): 00:27:57.585 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2802], 00:27:57.585 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:27:57.585 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3621], 95.00th=[ 3818], 00:27:57.585 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 4817], 99.95th=[ 4948], 00:27:57.585 | 99.99th=[ 5211] 00:27:57.585 bw ( KiB/s): min=19952, max=20608, per=24.96%, avg=20349.22, stdev=195.47, samples=9 00:27:57.585 iops : min= 2494, max= 2576, avg=2543.56, stdev=24.51, samples=9 00:27:57.585 lat (msec) : 2=0.20%, 4=97.03%, 10=2.78% 00:27:57.585 cpu : usr=96.86%, sys=2.74%, ctx=8, majf=0, minf=50 00:27:57.585 IO depths : 1=0.1%, 2=1.0%, 4=65.2%, 8=33.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.585 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.585 issued rwts: total=12716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.585 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.585 00:27:57.585 Run status group 0 (all jobs): 00:27:57.585 READ: bw=79.6MiB/s (83.5MB/s), 19.8MiB/s-20.0MiB/s (20.8MB/s-21.0MB/s), io=398MiB (418MB), run=5001-5003msec 00:27:57.585 21:21:13 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:57.585 21:21:13 -- target/dif.sh@43 -- # local sub 00:27:57.585 21:21:13 -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.585 21:21:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:57.585 21:21:13 -- target/dif.sh@36 -- # local sub_id=0 00:27:57.585 21:21:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:57.585 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.585 21:21:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:57.585 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 
00:27:57.585 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.585 21:21:13 -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.585 21:21:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:57.585 21:21:13 -- target/dif.sh@36 -- # local sub_id=1 00:27:57.585 21:21:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.585 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.585 21:21:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:57.585 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.585 00:27:57.585 real 0m24.323s 00:27:57.585 user 4m49.001s 00:27:57.585 sys 0m4.856s 00:27:57.585 21:21:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 ************************************ 00:27:57.585 END TEST fio_dif_rand_params 00:27:57.585 ************************************ 00:27:57.585 21:21:13 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:57.585 21:21:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:57.585 21:21:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 ************************************ 00:27:57.585 START TEST fio_dif_digest 00:27:57.585 ************************************ 00:27:57.585 21:21:13 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:27:57.585 21:21:13 -- target/dif.sh@123 -- # local NULL_DIF 00:27:57.585 21:21:13 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:57.585 21:21:13 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:57.585 21:21:13 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:57.585 21:21:13 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:57.585 21:21:13 -- target/dif.sh@127 -- # numjobs=3 00:27:57.585 21:21:13 -- target/dif.sh@127 -- # iodepth=3 00:27:57.585 21:21:13 -- target/dif.sh@127 -- # runtime=10 00:27:57.585 21:21:13 -- target/dif.sh@128 -- # hdgst=true 00:27:57.585 21:21:13 -- target/dif.sh@128 -- # ddgst=true 00:27:57.585 21:21:13 -- target/dif.sh@130 -- # create_subsystems 0 00:27:57.585 21:21:13 -- target/dif.sh@28 -- # local sub 00:27:57.585 21:21:13 -- target/dif.sh@30 -- # for sub in "$@" 00:27:57.585 21:21:13 -- target/dif.sh@31 -- # create_subsystem 0 00:27:57.585 21:21:13 -- target/dif.sh@18 -- # local sub_id=0 00:27:57.585 21:21:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:57.585 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 bdev_null0 00:27:57.585 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.585 21:21:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:57.585 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.585 21:21:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:27:57.585 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.585 21:21:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:57.585 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.585 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.585 [2024-04-18 21:21:13.309824] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.585 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.585 21:21:13 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:57.585 21:21:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.585 21:21:13 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:57.586 21:21:13 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.586 21:21:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:57.586 21:21:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:57.586 21:21:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:57.586 21:21:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:57.586 21:21:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.586 21:21:13 -- common/autotest_common.sh@1327 -- # shift 00:27:57.586 21:21:13 -- nvmf/common.sh@521 -- # config=() 00:27:57.586 21:21:13 -- target/dif.sh@82 -- # gen_fio_conf 00:27:57.586 21:21:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:57.586 21:21:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.586 21:21:13 -- nvmf/common.sh@521 -- # local subsystem config 00:27:57.586 21:21:13 -- target/dif.sh@54 -- # local file 00:27:57.586 21:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:57.586 21:21:13 -- target/dif.sh@56 -- # cat 00:27:57.586 21:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:57.586 { 00:27:57.586 "params": { 00:27:57.586 "name": "Nvme$subsystem", 00:27:57.586 "trtype": "$TEST_TRANSPORT", 00:27:57.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.586 "adrfam": "ipv4", 00:27:57.586 "trsvcid": "$NVMF_PORT", 00:27:57.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.586 "hdgst": ${hdgst:-false}, 00:27:57.586 "ddgst": ${ddgst:-false} 00:27:57.586 }, 00:27:57.586 "method": "bdev_nvme_attach_controller" 00:27:57.586 } 00:27:57.586 EOF 00:27:57.586 )") 00:27:57.586 21:21:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.586 21:21:13 -- nvmf/common.sh@543 -- # cat 00:27:57.586 21:21:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:57.586 21:21:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:57.586 21:21:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:57.586 21:21:13 -- target/dif.sh@72 -- # (( file <= files )) 00:27:57.586 21:21:13 -- nvmf/common.sh@545 -- # jq . 
00:27:57.586 21:21:13 -- nvmf/common.sh@546 -- # IFS=, 00:27:57.586 21:21:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:57.586 "params": { 00:27:57.586 "name": "Nvme0", 00:27:57.586 "trtype": "tcp", 00:27:57.586 "traddr": "10.0.0.2", 00:27:57.586 "adrfam": "ipv4", 00:27:57.586 "trsvcid": "4420", 00:27:57.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:57.586 "hdgst": true, 00:27:57.586 "ddgst": true 00:27:57.586 }, 00:27:57.586 "method": "bdev_nvme_attach_controller" 00:27:57.586 }' 00:27:57.586 21:21:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:57.586 21:21:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:57.586 21:21:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.586 21:21:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.586 21:21:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:57.586 21:21:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:57.586 21:21:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:57.586 21:21:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:57.586 21:21:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:57.586 21:21:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.844 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:57.844 ... 00:27:57.844 fio-3.35 00:27:57.844 Starting 3 threads 00:27:57.844 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.057 00:28:10.057 filename0: (groupid=0, jobs=1): err= 0: pid=3229050: Thu Apr 18 21:21:24 2024 00:28:10.057 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(396MiB/10046msec) 00:28:10.057 slat (nsec): min=6308, max=24095, avg=10155.31, stdev=2278.31 00:28:10.057 clat (usec): min=4645, max=94725, avg=9477.81, stdev=6647.00 00:28:10.057 lat (usec): min=4652, max=94737, avg=9487.97, stdev=6647.19 00:28:10.057 clat percentiles (usec): 00:28:10.057 | 1.00th=[ 5080], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 7046], 00:28:10.057 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9241], 00:28:10.057 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11863], 00:28:10.057 | 99.00th=[53740], 99.50th=[55313], 99.90th=[90702], 99.95th=[93848], 00:28:10.057 | 99.99th=[94897] 00:28:10.057 bw ( KiB/s): min=29184, max=47104, per=46.43%, avg=40563.20, stdev=5714.97, samples=20 00:28:10.057 iops : min= 228, max= 368, avg=316.90, stdev=44.65, samples=20 00:28:10.057 lat (msec) : 10=73.57%, 20=24.69%, 50=0.19%, 100=1.55% 00:28:10.057 cpu : usr=93.84%, sys=5.60%, ctx=17, majf=0, minf=146 00:28:10.057 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.057 issued rwts: total=3171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.057 filename0: (groupid=0, jobs=1): err= 0: pid=3229051: Thu Apr 18 21:21:24 2024 00:28:10.057 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(240MiB/10016msec) 00:28:10.057 slat (nsec): min=6371, max=25049, avg=10732.55, stdev=2136.11 00:28:10.057 clat (usec): 
min=5245, max=96490, avg=15634.47, stdev=13878.41 00:28:10.057 lat (usec): min=5252, max=96503, avg=15645.20, stdev=13878.56 00:28:10.057 clat percentiles (usec): 00:28:10.057 | 1.00th=[ 5866], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9110], 00:28:10.057 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11863], 60.00th=[12518], 00:28:10.057 | 70.00th=[13173], 80.00th=[13960], 90.00th=[17957], 95.00th=[54264], 00:28:10.057 | 99.00th=[56886], 99.50th=[91751], 99.90th=[94897], 99.95th=[96994], 00:28:10.057 | 99.99th=[96994] 00:28:10.057 bw ( KiB/s): min=18432, max=36352, per=28.08%, avg=24537.60, stdev=4703.04, samples=20 00:28:10.057 iops : min= 144, max= 284, avg=191.70, stdev=36.74, samples=20 00:28:10.057 lat (msec) : 10=30.73%, 20=59.32%, 50=0.21%, 100=9.74% 00:28:10.057 cpu : usr=95.83%, sys=3.85%, ctx=15, majf=0, minf=47 00:28:10.057 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.057 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.057 filename0: (groupid=0, jobs=1): err= 0: pid=3229052: Thu Apr 18 21:21:24 2024 00:28:10.057 read: IOPS=175, BW=22.0MiB/s (23.0MB/s)(221MiB/10045msec) 00:28:10.057 slat (nsec): min=6319, max=36292, avg=11074.93, stdev=2023.12 00:28:10.057 clat (msec): min=5, max=100, avg=17.05, stdev=16.28 00:28:10.057 lat (msec): min=5, max=100, avg=17.06, stdev=16.28 00:28:10.057 clat percentiles (msec): 00:28:10.057 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:28:10.057 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:28:10.057 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 53], 95.00th=[ 55], 00:28:10.057 | 99.00th=[ 93], 99.50th=[ 95], 99.90th=[ 99], 99.95th=[ 102], 00:28:10.057 | 99.99th=[ 102] 00:28:10.057 bw ( KiB/s): min=14336, max=33536, per=25.83%, avg=22566.40, stdev=5003.66, samples=20 00:28:10.057 iops : min= 112, max= 262, avg=176.30, stdev=39.09, samples=20 00:28:10.057 lat (msec) : 10=26.73%, 20=60.02%, 50=0.85%, 100=12.34%, 250=0.06% 00:28:10.057 cpu : usr=95.63%, sys=3.86%, ctx=17, majf=0, minf=159 00:28:10.057 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.057 issued rwts: total=1766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.058 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.058 00:28:10.058 Run status group 0 (all jobs): 00:28:10.058 READ: bw=85.3MiB/s (89.5MB/s), 22.0MiB/s-39.5MiB/s (23.0MB/s-41.4MB/s), io=857MiB (899MB), run=10016-10046msec 00:28:10.058 21:21:24 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:10.058 21:21:24 -- target/dif.sh@43 -- # local sub 00:28:10.058 21:21:24 -- target/dif.sh@45 -- # for sub in "$@" 00:28:10.058 21:21:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:10.058 21:21:24 -- target/dif.sh@36 -- # local sub_id=0 00:28:10.058 21:21:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:10.058 21:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.058 21:21:24 -- common/autotest_common.sh@10 -- # set +x 00:28:10.058 21:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.058 21:21:24 -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:28:10.058 21:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.058 21:21:24 -- common/autotest_common.sh@10 -- # set +x 00:28:10.058 21:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.058 00:28:10.058 real 0m11.047s 00:28:10.058 user 0m34.760s 00:28:10.058 sys 0m1.619s 00:28:10.058 21:21:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:10.058 21:21:24 -- common/autotest_common.sh@10 -- # set +x 00:28:10.058 ************************************ 00:28:10.058 END TEST fio_dif_digest 00:28:10.058 ************************************ 00:28:10.058 21:21:24 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:10.058 21:21:24 -- target/dif.sh@147 -- # nvmftestfini 00:28:10.058 21:21:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:10.058 21:21:24 -- nvmf/common.sh@117 -- # sync 00:28:10.058 21:21:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.058 21:21:24 -- nvmf/common.sh@120 -- # set +e 00:28:10.058 21:21:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.058 21:21:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.058 rmmod nvme_tcp 00:28:10.058 rmmod nvme_fabrics 00:28:10.058 rmmod nvme_keyring 00:28:10.058 21:21:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.058 21:21:24 -- nvmf/common.sh@124 -- # set -e 00:28:10.058 21:21:24 -- nvmf/common.sh@125 -- # return 0 00:28:10.058 21:21:24 -- nvmf/common.sh@478 -- # '[' -n 3219894 ']' 00:28:10.058 21:21:24 -- nvmf/common.sh@479 -- # killprocess 3219894 00:28:10.058 21:21:24 -- common/autotest_common.sh@936 -- # '[' -z 3219894 ']' 00:28:10.058 21:21:24 -- common/autotest_common.sh@940 -- # kill -0 3219894 00:28:10.058 21:21:24 -- common/autotest_common.sh@941 -- # uname 00:28:10.058 21:21:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:10.058 21:21:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3219894 00:28:10.058 21:21:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:10.058 21:21:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:10.058 21:21:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3219894' 00:28:10.058 killing process with pid 3219894 00:28:10.058 21:21:24 -- common/autotest_common.sh@955 -- # kill 3219894 00:28:10.058 21:21:24 -- common/autotest_common.sh@960 -- # wait 3219894 00:28:10.058 21:21:24 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:10.058 21:21:24 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:11.964 Waiting for block devices as requested 00:28:11.964 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:11.964 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:11.964 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:11.964 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:11.964 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:11.964 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:12.224 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:12.224 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:12.224 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:12.224 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:12.224 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:12.483 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:12.483 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:12.483 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:12.742 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:12.742 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:12.742 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:12.742 21:21:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:12.742 21:21:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:12.742 21:21:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.742 21:21:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.742 21:21:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.742 21:21:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:12.742 21:21:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.280 21:21:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.280 00:28:15.280 real 1m14.862s 00:28:15.280 user 7m6.814s 00:28:15.280 sys 0m19.903s 00:28:15.280 21:21:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:15.280 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:28:15.280 ************************************ 00:28:15.280 END TEST nvmf_dif 00:28:15.280 ************************************ 00:28:15.280 21:21:30 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:15.280 21:21:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:15.280 21:21:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:15.280 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:28:15.280 ************************************ 00:28:15.280 START TEST nvmf_abort_qd_sizes 00:28:15.280 ************************************ 00:28:15.280 21:21:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:15.280 * Looking for test storage... 
00:28:15.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.280 21:21:30 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.280 21:21:30 -- nvmf/common.sh@7 -- # uname -s 00:28:15.280 21:21:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.280 21:21:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.280 21:21:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.280 21:21:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.280 21:21:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.280 21:21:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.280 21:21:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.280 21:21:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.280 21:21:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.280 21:21:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.280 21:21:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:15.280 21:21:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:15.280 21:21:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.280 21:21:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.280 21:21:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.280 21:21:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.280 21:21:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.280 21:21:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.280 21:21:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.280 21:21:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.280 21:21:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.280 21:21:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.280 21:21:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.280 21:21:31 -- paths/export.sh@5 -- # export PATH 00:28:15.280 21:21:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.280 21:21:31 -- nvmf/common.sh@47 -- # : 0 00:28:15.280 21:21:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.280 21:21:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.280 21:21:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.280 21:21:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.280 21:21:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.280 21:21:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.280 21:21:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.280 21:21:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.280 21:21:31 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:15.280 21:21:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:15.280 21:21:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.280 21:21:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:15.280 21:21:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:15.280 21:21:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:15.280 21:21:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.280 21:21:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:15.280 21:21:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.280 21:21:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:15.280 21:21:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:15.280 21:21:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.280 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:28:21.853 21:21:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:21.853 21:21:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.853 21:21:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.853 21:21:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.853 21:21:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.853 21:21:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.853 21:21:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.853 21:21:36 -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.853 21:21:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.853 21:21:36 -- nvmf/common.sh@296 -- # e810=() 00:28:21.853 21:21:36 -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.853 21:21:36 -- nvmf/common.sh@297 -- # x722=() 00:28:21.853 21:21:36 -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.853 21:21:36 -- nvmf/common.sh@298 -- # mlx=() 00:28:21.853 21:21:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.853 21:21:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.853 21:21:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.853 21:21:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.853 21:21:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.853 21:21:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.853 21:21:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:21.853 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:21.853 21:21:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.853 21:21:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:21.853 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:21.853 21:21:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.853 21:21:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.853 21:21:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.853 21:21:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.853 21:21:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:21.853 21:21:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.853 21:21:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:21.853 Found net devices under 0000:86:00.0: cvl_0_0 00:28:21.853 21:21:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.853 21:21:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.853 21:21:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.853 21:21:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:21.853 21:21:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.853 21:21:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:21.853 Found net devices under 0000:86:00.1: cvl_0_1 00:28:21.853 21:21:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.853 21:21:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:21.854 21:21:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:21.854 21:21:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:21.854 21:21:36 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:21.854 21:21:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:21.854 21:21:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.854 21:21:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.854 21:21:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.854 21:21:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.854 21:21:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.854 21:21:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.854 21:21:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.854 21:21:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.854 21:21:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.854 21:21:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.854 21:21:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.854 21:21:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.854 21:21:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.854 21:21:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.854 21:21:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.854 21:21:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.854 21:21:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.854 21:21:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.854 21:21:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.854 21:21:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:28:21.854 00:28:21.854 --- 10.0.0.2 ping statistics --- 00:28:21.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.854 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:28:21.854 21:21:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:28:21.854 00:28:21.854 --- 10.0.0.1 ping statistics --- 00:28:21.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.854 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:28:21.854 21:21:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.854 21:21:36 -- nvmf/common.sh@411 -- # return 0 00:28:21.854 21:21:36 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:21.854 21:21:36 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:24.461 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:24.461 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:25.029 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:25.029 21:21:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.029 21:21:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:25.029 21:21:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:25.029 21:21:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.029 21:21:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:25.029 21:21:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:25.029 21:21:40 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:25.029 21:21:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:25.029 21:21:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:25.029 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:28:25.029 21:21:40 -- nvmf/common.sh@470 -- # nvmfpid=3237544 00:28:25.029 21:21:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:25.029 21:21:40 -- nvmf/common.sh@471 -- # waitforlisten 3237544 00:28:25.029 21:21:40 -- common/autotest_common.sh@817 -- # '[' -z 3237544 ']' 00:28:25.029 21:21:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.029 21:21:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:25.029 21:21:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.029 21:21:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:25.029 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:28:25.288 [2024-04-18 21:21:40.961827] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:28:25.288 [2024-04-18 21:21:40.961868] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.288 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.288 [2024-04-18 21:21:41.026251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.288 [2024-04-18 21:21:41.105893] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.288 [2024-04-18 21:21:41.105929] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.288 [2024-04-18 21:21:41.105936] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.288 [2024-04-18 21:21:41.105942] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.288 [2024-04-18 21:21:41.105947] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.288 [2024-04-18 21:21:41.105987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.288 [2024-04-18 21:21:41.106072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.288 [2024-04-18 21:21:41.106154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.288 [2024-04-18 21:21:41.106155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.857 21:21:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:25.857 21:21:41 -- common/autotest_common.sh@850 -- # return 0 00:28:25.857 21:21:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:25.857 21:21:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:25.857 21:21:41 -- common/autotest_common.sh@10 -- # set +x 00:28:26.116 21:21:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.116 21:21:41 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:26.116 21:21:41 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:26.116 21:21:41 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:26.116 21:21:41 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:26.116 21:21:41 -- scripts/common.sh@310 -- # local nvmes 00:28:26.116 21:21:41 -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:28:26.116 21:21:41 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:26.116 21:21:41 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:26.116 21:21:41 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:28:26.116 21:21:41 -- scripts/common.sh@320 -- # uname -s 00:28:26.116 21:21:41 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:26.116 21:21:41 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:26.116 21:21:41 -- scripts/common.sh@325 -- # (( 1 )) 00:28:26.116 21:21:41 -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:28:26.116 21:21:41 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:26.116 21:21:41 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:28:26.116 21:21:41 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:26.116 21:21:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:26.116 21:21:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:26.116 21:21:41 -- 
common/autotest_common.sh@10 -- # set +x 00:28:26.116 ************************************ 00:28:26.116 START TEST spdk_target_abort 00:28:26.116 ************************************ 00:28:26.116 21:21:41 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:26.116 21:21:41 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:26.116 21:21:41 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:28:26.116 21:21:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.116 21:21:41 -- common/autotest_common.sh@10 -- # set +x 00:28:29.405 spdk_targetn1 00:28:29.405 21:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.405 21:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.405 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:28:29.405 [2024-04-18 21:21:44.774116] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.405 21:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:29.405 21:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.405 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:28:29.405 21:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:29.405 21:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.405 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:28:29.405 21:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:29.405 21:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.405 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:28:29.405 [2024-04-18 21:21:44.811131] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.405 21:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:29.405 21:21:44 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.405 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.699 Initializing NVMe Controllers 00:28:32.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:32.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:32.699 Initialization complete. Launching workers. 00:28:32.699 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7415, failed: 0 00:28:32.700 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1418, failed to submit 5997 00:28:32.700 success 861, unsuccess 557, failed 0 00:28:32.700 21:21:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:32.700 21:21:47 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:32.700 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.991 Initializing NVMe Controllers 00:28:35.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:35.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:35.991 Initialization complete. Launching workers. 00:28:35.991 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8520, failed: 0 00:28:35.991 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1278, failed to submit 7242 00:28:35.991 success 295, unsuccess 983, failed 0 00:28:35.991 21:21:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:35.991 21:21:51 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:35.991 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.284 Initializing NVMe Controllers 00:28:39.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:39.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:39.284 Initialization complete. Launching workers. 
00:28:39.284 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36283, failed: 0 00:28:39.284 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2724, failed to submit 33559 00:28:39.284 success 632, unsuccess 2092, failed 0 00:28:39.284 21:21:54 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:39.284 21:21:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.284 21:21:54 -- common/autotest_common.sh@10 -- # set +x 00:28:39.284 21:21:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.284 21:21:54 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:39.284 21:21:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.284 21:21:54 -- common/autotest_common.sh@10 -- # set +x 00:28:39.852 21:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.852 21:21:55 -- target/abort_qd_sizes.sh@61 -- # killprocess 3237544 00:28:39.852 21:21:55 -- common/autotest_common.sh@936 -- # '[' -z 3237544 ']' 00:28:39.852 21:21:55 -- common/autotest_common.sh@940 -- # kill -0 3237544 00:28:39.852 21:21:55 -- common/autotest_common.sh@941 -- # uname 00:28:39.852 21:21:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:39.852 21:21:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3237544 00:28:40.112 21:21:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:40.112 21:21:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:40.112 21:21:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3237544' 00:28:40.112 killing process with pid 3237544 00:28:40.112 21:21:55 -- common/autotest_common.sh@955 -- # kill 3237544 00:28:40.112 21:21:55 -- common/autotest_common.sh@960 -- # wait 3237544 00:28:40.112 00:28:40.112 real 0m14.072s 00:28:40.112 user 0m56.372s 00:28:40.112 sys 0m2.215s 00:28:40.112 21:21:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:40.112 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:28:40.112 ************************************ 00:28:40.112 END TEST spdk_target_abort 00:28:40.112 ************************************ 00:28:40.372 21:21:56 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:40.372 21:21:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:40.372 21:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:40.372 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:28:40.372 ************************************ 00:28:40.372 START TEST kernel_target_abort 00:28:40.372 ************************************ 00:28:40.372 21:21:56 -- common/autotest_common.sh@1111 -- # kernel_target 00:28:40.372 21:21:56 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:40.372 21:21:56 -- nvmf/common.sh@730 -- # local ip 00:28:40.372 21:21:56 -- nvmf/common.sh@731 -- # ip_candidates=() 00:28:40.372 21:21:56 -- nvmf/common.sh@731 -- # local -A ip_candidates 00:28:40.372 21:21:56 -- nvmf/common.sh@733 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.372 21:21:56 -- nvmf/common.sh@734 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.372 21:21:56 -- nvmf/common.sh@736 -- # [[ -z tcp ]] 00:28:40.372 21:21:56 -- nvmf/common.sh@736 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.372 21:21:56 -- nvmf/common.sh@737 -- # ip=NVMF_INITIATOR_IP 00:28:40.372 21:21:56 -- nvmf/common.sh@739 -- # [[ -z 10.0.0.1 ]] 00:28:40.372 21:21:56 -- nvmf/common.sh@744 -- # 
echo 10.0.0.1 00:28:40.372 21:21:56 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:40.372 21:21:56 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:40.372 21:21:56 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:40.372 21:21:56 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:40.372 21:21:56 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:40.372 21:21:56 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:40.372 21:21:56 -- nvmf/common.sh@628 -- # local block nvme 00:28:40.372 21:21:56 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:28:40.372 21:21:56 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:40.372 21:21:56 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:40.372 21:21:56 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:43.661 Waiting for block devices as requested 00:28:43.661 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:43.661 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:43.661 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:43.661 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:43.661 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:43.661 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:43.661 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:43.661 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:43.920 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:43.920 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:43.920 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:44.179 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:44.179 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:44.179 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:44.179 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:44.438 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:44.438 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:44.438 21:22:00 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:44.438 21:22:00 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:44.438 21:22:00 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:44.438 21:22:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:44.438 21:22:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:44.438 21:22:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:44.438 21:22:00 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:44.438 21:22:00 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:44.438 21:22:00 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:44.438 No valid GPT data, bailing 00:28:44.438 21:22:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:44.438 21:22:00 -- scripts/common.sh@391 -- # pt= 00:28:44.698 21:22:00 -- scripts/common.sh@392 -- # return 1 00:28:44.698 21:22:00 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:44.698 21:22:00 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:44.698 21:22:00 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.698 21:22:00 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:44.698 21:22:00 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:44.698 21:22:00 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:44.698 21:22:00 -- nvmf/common.sh@656 -- # echo 1 00:28:44.698 21:22:00 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:44.698 21:22:00 -- nvmf/common.sh@658 -- # echo 1 00:28:44.698 21:22:00 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:44.698 21:22:00 -- nvmf/common.sh@661 -- # echo tcp 00:28:44.698 21:22:00 -- nvmf/common.sh@662 -- # echo 4420 00:28:44.698 21:22:00 -- nvmf/common.sh@663 -- # echo ipv4 00:28:44.698 21:22:00 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:44.698 21:22:00 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:44.698 00:28:44.698 Discovery Log Number of Records 2, Generation counter 2 00:28:44.698 =====Discovery Log Entry 0====== 00:28:44.698 trtype: tcp 00:28:44.698 adrfam: ipv4 00:28:44.698 subtype: current discovery subsystem 00:28:44.698 treq: not specified, sq flow control disable supported 00:28:44.698 portid: 1 00:28:44.698 trsvcid: 4420 00:28:44.698 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:44.698 traddr: 10.0.0.1 00:28:44.698 eflags: none 00:28:44.698 sectype: none 00:28:44.698 =====Discovery Log Entry 1====== 00:28:44.698 trtype: tcp 00:28:44.698 adrfam: ipv4 00:28:44.698 subtype: nvme subsystem 00:28:44.698 treq: not specified, sq flow control disable supported 00:28:44.698 portid: 1 00:28:44.698 trsvcid: 4420 00:28:44.698 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:44.698 traddr: 10.0.0.1 00:28:44.698 eflags: none 00:28:44.698 sectype: none 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:44.698 21:22:00 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.698 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.987 Initializing NVMe Controllers 00:28:47.987 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:47.987 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:47.987 Initialization complete. Launching workers. 00:28:47.987 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54046, failed: 0 00:28:47.987 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 54046, failed to submit 0 00:28:47.987 success 0, unsuccess 54046, failed 0 00:28:47.987 21:22:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:47.987 21:22:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:47.987 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.304 Initializing NVMe Controllers 00:28:51.304 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:51.304 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:51.304 Initialization complete. Launching workers. 00:28:51.304 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98359, failed: 0 00:28:51.304 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24866, failed to submit 73493 00:28:51.304 success 0, unsuccess 24866, failed 0 00:28:51.304 21:22:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:51.304 21:22:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:51.304 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.839 Initializing NVMe Controllers 00:28:53.839 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:53.839 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:53.839 Initialization complete. Launching workers. 
00:28:53.839 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96061, failed: 0 00:28:53.839 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24026, failed to submit 72035 00:28:53.839 success 0, unsuccess 24026, failed 0 00:28:53.839 21:22:09 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:53.839 21:22:09 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:53.839 21:22:09 -- nvmf/common.sh@675 -- # echo 0 00:28:53.839 21:22:09 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.839 21:22:09 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:53.839 21:22:09 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:53.839 21:22:09 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.839 21:22:09 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:53.839 21:22:09 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:54.098 21:22:09 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:57.394 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:57.394 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:57.966 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:57.966 00:28:57.966 real 0m17.555s 00:28:57.966 user 0m6.545s 00:28:57.966 sys 0m5.694s 00:28:57.966 21:22:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:57.966 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:28:57.966 ************************************ 00:28:57.966 END TEST kernel_target_abort 00:28:57.966 ************************************ 00:28:57.966 21:22:13 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:57.966 21:22:13 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:57.966 21:22:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:57.966 21:22:13 -- nvmf/common.sh@117 -- # sync 00:28:57.966 21:22:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.966 21:22:13 -- nvmf/common.sh@120 -- # set +e 00:28:57.966 21:22:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:57.966 21:22:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.966 rmmod nvme_tcp 00:28:57.966 rmmod nvme_fabrics 00:28:57.966 rmmod nvme_keyring 00:28:57.966 21:22:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:57.966 21:22:13 -- nvmf/common.sh@124 -- # set -e 00:28:57.966 21:22:13 -- nvmf/common.sh@125 -- # return 0 00:28:57.966 21:22:13 -- nvmf/common.sh@478 -- # '[' -n 3237544 ']' 
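The kernel_target_abort run that finishes above drives a Linux kernel nvmet target which the traced nvmf/common.sh helpers build entirely through configfs before the abort example is pointed at 10.0.0.1:4420. A condensed sketch of that configure/teardown sequence is below; the NQN, namespace device, address, transport and port are the values from this run, the configfs attribute names on the right are the standard nvmet ones (the trace does not show the redirection targets, so the exact mapping is an assumption), and hugepage/driver setup plus error handling are omitted.

  # configure_kernel_target (sketch): export /dev/nvme0n1 over NVMe/TCP via nvmet configfs
  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"          # model string (assumed target)
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                                   # start listening

  # clean_kernel_target (sketch): disable the namespace, unlink, and remove everything
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet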
00:28:57.966 21:22:13 -- nvmf/common.sh@479 -- # killprocess 3237544 00:28:57.966 21:22:13 -- common/autotest_common.sh@936 -- # '[' -z 3237544 ']' 00:28:57.966 21:22:13 -- common/autotest_common.sh@940 -- # kill -0 3237544 00:28:57.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3237544) - No such process 00:28:57.966 21:22:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3237544 is not found' 00:28:57.966 Process with pid 3237544 is not found 00:28:57.966 21:22:13 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:57.966 21:22:13 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:01.257 Waiting for block devices as requested 00:29:01.257 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:01.257 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:01.257 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:01.257 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:01.257 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:01.257 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:01.257 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:01.257 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:01.516 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:01.516 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:01.516 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:01.776 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:01.776 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:01.776 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:01.776 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:02.035 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:02.035 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:02.035 21:22:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:02.035 21:22:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:02.035 21:22:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:02.035 21:22:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:02.035 21:22:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.035 21:22:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:02.035 21:22:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.571 21:22:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:04.571 00:29:04.571 real 0m49.091s 00:29:04.571 user 1m7.411s 00:29:04.571 sys 0m16.897s 00:29:04.571 21:22:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:04.571 21:22:19 -- common/autotest_common.sh@10 -- # set +x 00:29:04.571 ************************************ 00:29:04.571 END TEST nvmf_abort_qd_sizes 00:29:04.571 ************************************ 00:29:04.571 21:22:20 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:04.571 21:22:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:04.571 21:22:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:04.571 21:22:20 -- common/autotest_common.sh@10 -- # set +x 00:29:04.571 ************************************ 00:29:04.571 START TEST keyring_file 00:29:04.571 ************************************ 00:29:04.571 21:22:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:04.571 * Looking for test storage... 
00:29:04.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:04.571 21:22:20 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:04.571 21:22:20 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.571 21:22:20 -- nvmf/common.sh@7 -- # uname -s 00:29:04.571 21:22:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.571 21:22:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.571 21:22:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.571 21:22:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.571 21:22:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.571 21:22:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.571 21:22:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.571 21:22:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.571 21:22:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.571 21:22:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.571 21:22:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:04.571 21:22:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:04.571 21:22:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.571 21:22:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.571 21:22:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.571 21:22:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.571 21:22:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.571 21:22:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.571 21:22:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.571 21:22:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.571 21:22:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.571 21:22:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.571 21:22:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.572 21:22:20 -- paths/export.sh@5 -- # export PATH 00:29:04.572 21:22:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.572 21:22:20 -- nvmf/common.sh@47 -- # : 0 00:29:04.572 21:22:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:04.572 21:22:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:04.572 21:22:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.572 21:22:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.572 21:22:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.572 21:22:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:04.572 21:22:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:04.572 21:22:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:04.572 21:22:20 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:04.572 21:22:20 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:04.572 21:22:20 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:04.572 21:22:20 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:04.572 21:22:20 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:04.572 21:22:20 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:04.572 21:22:20 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:04.572 21:22:20 -- keyring/common.sh@15 -- # local name key digest path 00:29:04.572 21:22:20 -- keyring/common.sh@17 -- # name=key0 00:29:04.572 21:22:20 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:04.572 21:22:20 -- keyring/common.sh@17 -- # digest=0 00:29:04.572 21:22:20 -- keyring/common.sh@18 -- # mktemp 00:29:04.572 21:22:20 -- keyring/common.sh@18 -- # path=/tmp/tmp.rrMP2Q7N18 00:29:04.572 21:22:20 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:04.572 21:22:20 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:04.572 21:22:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:04.572 21:22:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:04.572 21:22:20 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:04.572 21:22:20 -- nvmf/common.sh@693 -- # digest=0 00:29:04.572 21:22:20 -- nvmf/common.sh@694 -- # python - 00:29:04.572 21:22:20 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rrMP2Q7N18 00:29:04.572 21:22:20 -- keyring/common.sh@23 -- # echo /tmp/tmp.rrMP2Q7N18 00:29:04.572 21:22:20 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rrMP2Q7N18 00:29:04.572 21:22:20 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:04.572 21:22:20 -- keyring/common.sh@15 -- # local name key digest path 00:29:04.572 21:22:20 -- keyring/common.sh@17 -- # name=key1 00:29:04.572 21:22:20 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:04.572 21:22:20 -- keyring/common.sh@17 -- # digest=0 00:29:04.572 21:22:20 -- keyring/common.sh@18 -- # mktemp 00:29:04.572 21:22:20 -- keyring/common.sh@18 -- # path=/tmp/tmp.2CLPNATrAU 00:29:04.572 21:22:20 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:04.572 21:22:20 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:29:04.572 21:22:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:04.572 21:22:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:04.572 21:22:20 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:04.572 21:22:20 -- nvmf/common.sh@693 -- # digest=0 00:29:04.572 21:22:20 -- nvmf/common.sh@694 -- # python - 00:29:04.572 21:22:20 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2CLPNATrAU 00:29:04.572 21:22:20 -- keyring/common.sh@23 -- # echo /tmp/tmp.2CLPNATrAU 00:29:04.572 21:22:20 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.2CLPNATrAU 00:29:04.572 21:22:20 -- keyring/file.sh@30 -- # tgtpid=3246859 00:29:04.572 21:22:20 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:04.572 21:22:20 -- keyring/file.sh@32 -- # waitforlisten 3246859 00:29:04.572 21:22:20 -- common/autotest_common.sh@817 -- # '[' -z 3246859 ']' 00:29:04.572 21:22:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.572 21:22:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:04.572 21:22:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.572 21:22:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:04.572 21:22:20 -- common/autotest_common.sh@10 -- # set +x 00:29:04.572 [2024-04-18 21:22:20.440479] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:29:04.572 [2024-04-18 21:22:20.440535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246859 ] 00:29:04.572 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.572 [2024-04-18 21:22:20.501164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.832 [2024-04-18 21:22:20.578863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.400 21:22:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:05.400 21:22:21 -- common/autotest_common.sh@850 -- # return 0 00:29:05.400 21:22:21 -- keyring/file.sh@33 -- # rpc_cmd 00:29:05.400 21:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.400 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:29:05.400 [2024-04-18 21:22:21.227902] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.400 null0 00:29:05.400 [2024-04-18 21:22:21.259961] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:05.400 [2024-04-18 21:22:21.260435] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:05.400 [2024-04-18 21:22:21.267983] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:05.400 21:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.400 21:22:21 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:05.400 21:22:21 -- common/autotest_common.sh@638 -- # local es=0 00:29:05.400 21:22:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:05.400 21:22:21 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:05.400 21:22:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:05.400 21:22:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:05.400 21:22:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:05.400 21:22:21 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:05.400 21:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.400 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:29:05.400 [2024-04-18 21:22:21.280009] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:05.400 { 00:29:05.400 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:05.400 "secure_channel": false, 00:29:05.400 "listen_address": { 00:29:05.400 "trtype": "tcp", 00:29:05.400 "traddr": "127.0.0.1", 00:29:05.400 "trsvcid": "4420" 00:29:05.400 }, 00:29:05.400 "method": "nvmf_subsystem_add_listener", 00:29:05.400 "req_id": 1 00:29:05.400 } 00:29:05.400 Got JSON-RPC error response 00:29:05.400 response: 00:29:05.400 { 00:29:05.400 "code": -32602, 00:29:05.400 "message": "Invalid parameters" 00:29:05.400 } 00:29:05.400 21:22:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:05.400 21:22:21 -- common/autotest_common.sh@641 -- # es=1 00:29:05.400 21:22:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:05.400 21:22:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:05.400 21:22:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:05.400 21:22:21 -- keyring/file.sh@46 -- # bperfpid=3246947 00:29:05.400 21:22:21 -- keyring/file.sh@48 -- # waitforlisten 3246947 /var/tmp/bperf.sock 00:29:05.400 21:22:21 -- common/autotest_common.sh@817 -- # '[' -z 3246947 ']' 00:29:05.400 21:22:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.400 21:22:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:05.400 21:22:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.400 21:22:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:05.400 21:22:21 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:05.400 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:29:05.400 [2024-04-18 21:22:21.327018] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:29:05.400 [2024-04-18 21:22:21.327059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246947 ] 00:29:05.659 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.659 [2024-04-18 21:22:21.386683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.659 [2024-04-18 21:22:21.460026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.227 21:22:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:06.227 21:22:22 -- common/autotest_common.sh@850 -- # return 0 00:29:06.228 21:22:22 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rrMP2Q7N18 00:29:06.228 21:22:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rrMP2Q7N18 00:29:06.487 21:22:22 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.2CLPNATrAU 00:29:06.487 21:22:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.2CLPNATrAU 00:29:06.746 21:22:22 -- keyring/file.sh@51 -- # get_key key0 00:29:06.746 21:22:22 -- keyring/file.sh@51 -- # jq -r .path 00:29:06.746 21:22:22 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.746 21:22:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.746 21:22:22 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:06.746 21:22:22 -- keyring/file.sh@51 -- # [[ /tmp/tmp.rrMP2Q7N18 == \/\t\m\p\/\t\m\p\.\r\r\M\P\2\Q\7\N\1\8 ]] 00:29:06.746 21:22:22 -- keyring/file.sh@52 -- # get_key key1 00:29:06.746 21:22:22 -- keyring/file.sh@52 -- # jq -r .path 00:29:06.746 21:22:22 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.746 21:22:22 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:06.746 21:22:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.005 21:22:22 -- keyring/file.sh@52 -- # [[ /tmp/tmp.2CLPNATrAU == \/\t\m\p\/\t\m\p\.\2\C\L\P\N\A\T\r\A\U ]] 00:29:07.005 21:22:22 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:07.005 21:22:22 -- keyring/common.sh@12 -- # get_key key0 00:29:07.005 21:22:22 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.005 21:22:22 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.005 21:22:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.005 21:22:22 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.265 21:22:23 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:07.265 21:22:23 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:07.265 21:22:23 -- keyring/common.sh@12 -- # get_key key1 00:29:07.265 21:22:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.265 21:22:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.265 21:22:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.265 21:22:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:07.524 21:22:23 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:07.524 
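Before bdevperf attaches the TLS-enabled controller in the next trace, the keyring_file test has already written two PSKs in the NVMe TLS interchange format and registered them over bdevperf's RPC socket, which is what the keyring_get_keys checks above are verifying. A condensed sketch of that per-key flow, using the same helpers and values traced in this run (format_interchange_psk from nvmf/common.sh, the /var/tmp/bperf.sock socket, key0) with the rpc.py path abbreviated, is:

  # prep_key (sketch): write key0 as an interchange-format PSK file with 0600 permissions
  key=$(format_interchange_psk 00112233445566778899aabbccddeeff 0)   # yields an "NVMeTLSkey-1:..." string
  path=$(mktemp)                                                      # e.g. /tmp/tmp.rrMP2Q7N18 in this run
  echo "$key" > "$path"
  chmod 0600 "$path"                                                  # file.sh later confirms that 0660 is rejected

  # register the key with bdevperf, then attach the target subsystem through it
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk key0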
21:22:23 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:07.524 21:22:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:07.524 [2024-04-18 21:22:23.357388] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:07.524 nvme0n1 00:29:07.524 21:22:23 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:07.524 21:22:23 -- keyring/common.sh@12 -- # get_key key0 00:29:07.524 21:22:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.524 21:22:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.524 21:22:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.524 21:22:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.783 21:22:23 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:07.783 21:22:23 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:07.783 21:22:23 -- keyring/common.sh@12 -- # get_key key1 00:29:07.783 21:22:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.783 21:22:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.783 21:22:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.783 21:22:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:08.042 21:22:23 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:08.042 21:22:23 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.042 Running I/O for 1 seconds... 
00:29:09.419 00:29:09.419 Latency(us) 00:29:09.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.420 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:09.420 nvme0n1 : 1.03 6635.83 25.92 0.00 0.00 19151.41 10542.75 139506.20 00:29:09.420 =================================================================================================================== 00:29:09.420 Total : 6635.83 25.92 0.00 0.00 19151.41 10542.75 139506.20 00:29:09.420 0 00:29:09.420 21:22:24 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:09.420 21:22:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:09.420 21:22:25 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:09.420 21:22:25 -- keyring/common.sh@12 -- # get_key key0 00:29:09.420 21:22:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.420 21:22:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.420 21:22:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.420 21:22:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.420 21:22:25 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:09.420 21:22:25 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:09.420 21:22:25 -- keyring/common.sh@12 -- # get_key key1 00:29:09.420 21:22:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.420 21:22:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.420 21:22:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:09.420 21:22:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.679 21:22:25 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:09.679 21:22:25 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:09.679 21:22:25 -- common/autotest_common.sh@638 -- # local es=0 00:29:09.679 21:22:25 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:09.679 21:22:25 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:09.679 21:22:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:09.679 21:22:25 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:09.679 21:22:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:09.679 21:22:25 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:09.679 21:22:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:09.939 [2024-04-18 21:22:25.648470] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:09.939 [2024-04-18 21:22:25.649116] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941e20 (107): Transport endpoint is not connected 00:29:09.939 [2024-04-18 21:22:25.650109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941e20 (9): Bad file descriptor 00:29:09.939 [2024-04-18 21:22:25.651110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:09.939 [2024-04-18 21:22:25.651121] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:09.939 [2024-04-18 21:22:25.651128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:09.939 request: 00:29:09.939 { 00:29:09.939 "name": "nvme0", 00:29:09.939 "trtype": "tcp", 00:29:09.939 "traddr": "127.0.0.1", 00:29:09.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:09.939 "adrfam": "ipv4", 00:29:09.939 "trsvcid": "4420", 00:29:09.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.939 "psk": "key1", 00:29:09.939 "method": "bdev_nvme_attach_controller", 00:29:09.939 "req_id": 1 00:29:09.939 } 00:29:09.939 Got JSON-RPC error response 00:29:09.939 response: 00:29:09.939 { 00:29:09.939 "code": -32602, 00:29:09.939 "message": "Invalid parameters" 00:29:09.939 } 00:29:09.939 21:22:25 -- common/autotest_common.sh@641 -- # es=1 00:29:09.939 21:22:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:09.939 21:22:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:09.939 21:22:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:09.939 21:22:25 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:09.939 21:22:25 -- keyring/common.sh@12 -- # get_key key0 00:29:09.939 21:22:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.939 21:22:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.939 21:22:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.939 21:22:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.939 21:22:25 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:09.939 21:22:25 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:09.939 21:22:25 -- keyring/common.sh@12 -- # get_key key1 00:29:09.939 21:22:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.939 21:22:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.939 21:22:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:09.939 21:22:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.199 21:22:26 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:10.199 21:22:26 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:10.199 21:22:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:10.458 21:22:26 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:10.458 21:22:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:10.458 21:22:26 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:10.458 21:22:26 -- keyring/file.sh@77 -- # jq length 00:29:10.458 21:22:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.717 21:22:26 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:10.717 21:22:26 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.rrMP2Q7N18 00:29:10.717 21:22:26 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rrMP2Q7N18 00:29:10.717 21:22:26 -- common/autotest_common.sh@638 -- # local es=0 00:29:10.717 21:22:26 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rrMP2Q7N18 00:29:10.717 21:22:26 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:10.717 21:22:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:10.717 21:22:26 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:10.717 21:22:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:10.717 21:22:26 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rrMP2Q7N18 00:29:10.717 21:22:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rrMP2Q7N18 00:29:10.978 [2024-04-18 21:22:26.696121] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rrMP2Q7N18': 0100660 00:29:10.978 [2024-04-18 21:22:26.696145] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:10.978 request: 00:29:10.978 { 00:29:10.978 "name": "key0", 00:29:10.978 "path": "/tmp/tmp.rrMP2Q7N18", 00:29:10.978 "method": "keyring_file_add_key", 00:29:10.978 "req_id": 1 00:29:10.978 } 00:29:10.978 Got JSON-RPC error response 00:29:10.978 response: 00:29:10.978 { 00:29:10.978 "code": -1, 00:29:10.978 "message": "Operation not permitted" 00:29:10.978 } 00:29:10.978 21:22:26 -- common/autotest_common.sh@641 -- # es=1 00:29:10.978 21:22:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:10.978 21:22:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:10.978 21:22:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:10.978 21:22:26 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.rrMP2Q7N18 00:29:10.978 21:22:26 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rrMP2Q7N18 00:29:10.978 21:22:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rrMP2Q7N18 00:29:10.978 21:22:26 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.rrMP2Q7N18 00:29:10.978 21:22:26 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:10.978 21:22:26 -- keyring/common.sh@12 -- # get_key key0 00:29:10.978 21:22:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.978 21:22:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.978 21:22:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.978 21:22:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:11.267 21:22:27 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:11.267 21:22:27 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.267 21:22:27 -- common/autotest_common.sh@638 -- # local es=0 00:29:11.267 21:22:27 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.267 21:22:27 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:11.267 21:22:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:11.267 21:22:27 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:11.267 21:22:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:11.267 21:22:27 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.268 21:22:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.527 [2024-04-18 21:22:27.225534] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rrMP2Q7N18': No such file or directory 00:29:11.527 [2024-04-18 21:22:27.225559] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:11.527 [2024-04-18 21:22:27.225580] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:11.527 [2024-04-18 21:22:27.225587] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:11.527 [2024-04-18 21:22:27.225592] bdev_nvme.c:6215:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:11.527 request: 00:29:11.527 { 00:29:11.527 "name": "nvme0", 00:29:11.527 "trtype": "tcp", 00:29:11.527 "traddr": "127.0.0.1", 00:29:11.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:11.527 "adrfam": "ipv4", 00:29:11.527 "trsvcid": "4420", 00:29:11.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.527 "psk": "key0", 00:29:11.527 "method": "bdev_nvme_attach_controller", 00:29:11.527 "req_id": 1 00:29:11.527 } 00:29:11.527 Got JSON-RPC error response 00:29:11.527 response: 00:29:11.527 { 00:29:11.527 "code": -19, 00:29:11.527 "message": "No such device" 00:29:11.527 } 00:29:11.527 21:22:27 -- common/autotest_common.sh@641 -- # es=1 00:29:11.527 21:22:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:11.527 21:22:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:11.527 21:22:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:11.527 21:22:27 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:11.527 21:22:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:11.527 21:22:27 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:11.527 21:22:27 -- keyring/common.sh@15 -- # local name key digest path 00:29:11.527 21:22:27 -- keyring/common.sh@17 -- # name=key0 00:29:11.527 21:22:27 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:11.527 21:22:27 -- keyring/common.sh@17 -- # digest=0 00:29:11.527 21:22:27 -- keyring/common.sh@18 -- # mktemp 00:29:11.527 21:22:27 -- keyring/common.sh@18 -- # path=/tmp/tmp.syscXyk3Sm 00:29:11.527 21:22:27 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:11.527 21:22:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:11.527 21:22:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:11.527 21:22:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:11.527 21:22:27 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:11.527 21:22:27 -- nvmf/common.sh@693 -- # digest=0 00:29:11.527 21:22:27 -- nvmf/common.sh@694 -- # python - 00:29:11.527 21:22:27 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.syscXyk3Sm 00:29:11.786 21:22:27 -- keyring/common.sh@23 -- # echo /tmp/tmp.syscXyk3Sm 00:29:11.786 21:22:27 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.syscXyk3Sm 00:29:11.786 21:22:27 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.syscXyk3Sm 00:29:11.786 21:22:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.syscXyk3Sm 00:29:11.786 21:22:27 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.786 21:22:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.045 nvme0n1 00:29:12.046 21:22:27 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:12.046 21:22:27 -- keyring/common.sh@12 -- # get_key key0 00:29:12.046 21:22:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.046 21:22:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.046 21:22:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.046 21:22:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.305 21:22:28 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:12.305 21:22:28 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:12.305 21:22:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:12.305 21:22:28 -- keyring/file.sh@101 -- # get_key key0 00:29:12.305 21:22:28 -- keyring/file.sh@101 -- # jq -r .removed 00:29:12.305 21:22:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.305 21:22:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.305 21:22:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.565 21:22:28 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:12.565 21:22:28 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:12.565 21:22:28 -- keyring/common.sh@12 -- # get_key key0 00:29:12.565 21:22:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.565 21:22:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.565 21:22:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.565 21:22:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.824 21:22:28 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:12.824 21:22:28 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:12.824 21:22:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:13.093 21:22:28 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:13.093 21:22:28 -- keyring/file.sh@104 -- # jq length 00:29:13.093 
21:22:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.093 21:22:28 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:13.093 21:22:28 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.syscXyk3Sm 00:29:13.093 21:22:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.syscXyk3Sm 00:29:13.357 21:22:29 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.2CLPNATrAU 00:29:13.357 21:22:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.2CLPNATrAU 00:29:13.616 21:22:29 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.616 21:22:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.616 nvme0n1 00:29:13.875 21:22:29 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:13.875 21:22:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:13.875 21:22:29 -- keyring/file.sh@112 -- # config='{ 00:29:13.875 "subsystems": [ 00:29:13.875 { 00:29:13.875 "subsystem": "keyring", 00:29:13.875 "config": [ 00:29:13.875 { 00:29:13.875 "method": "keyring_file_add_key", 00:29:13.875 "params": { 00:29:13.875 "name": "key0", 00:29:13.875 "path": "/tmp/tmp.syscXyk3Sm" 00:29:13.875 } 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "method": "keyring_file_add_key", 00:29:13.875 "params": { 00:29:13.875 "name": "key1", 00:29:13.875 "path": "/tmp/tmp.2CLPNATrAU" 00:29:13.875 } 00:29:13.875 } 00:29:13.875 ] 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "subsystem": "iobuf", 00:29:13.875 "config": [ 00:29:13.875 { 00:29:13.875 "method": "iobuf_set_options", 00:29:13.875 "params": { 00:29:13.875 "small_pool_count": 8192, 00:29:13.875 "large_pool_count": 1024, 00:29:13.875 "small_bufsize": 8192, 00:29:13.875 "large_bufsize": 135168 00:29:13.875 } 00:29:13.875 } 00:29:13.875 ] 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "subsystem": "sock", 00:29:13.875 "config": [ 00:29:13.875 { 00:29:13.875 "method": "sock_impl_set_options", 00:29:13.875 "params": { 00:29:13.875 "impl_name": "posix", 00:29:13.875 "recv_buf_size": 2097152, 00:29:13.875 "send_buf_size": 2097152, 00:29:13.875 "enable_recv_pipe": true, 00:29:13.875 "enable_quickack": false, 00:29:13.875 "enable_placement_id": 0, 00:29:13.875 "enable_zerocopy_send_server": true, 00:29:13.875 "enable_zerocopy_send_client": false, 00:29:13.875 "zerocopy_threshold": 0, 00:29:13.875 "tls_version": 0, 00:29:13.875 "enable_ktls": false 00:29:13.875 } 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "method": "sock_impl_set_options", 00:29:13.875 "params": { 00:29:13.875 "impl_name": "ssl", 00:29:13.875 "recv_buf_size": 4096, 00:29:13.875 "send_buf_size": 4096, 00:29:13.875 "enable_recv_pipe": true, 00:29:13.875 "enable_quickack": false, 00:29:13.875 "enable_placement_id": 0, 00:29:13.875 "enable_zerocopy_send_server": true, 00:29:13.875 "enable_zerocopy_send_client": false, 00:29:13.875 "zerocopy_threshold": 0, 00:29:13.875 
"tls_version": 0, 00:29:13.875 "enable_ktls": false 00:29:13.875 } 00:29:13.875 } 00:29:13.875 ] 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "subsystem": "vmd", 00:29:13.875 "config": [] 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "subsystem": "accel", 00:29:13.875 "config": [ 00:29:13.875 { 00:29:13.875 "method": "accel_set_options", 00:29:13.875 "params": { 00:29:13.875 "small_cache_size": 128, 00:29:13.875 "large_cache_size": 16, 00:29:13.875 "task_count": 2048, 00:29:13.875 "sequence_count": 2048, 00:29:13.875 "buf_count": 2048 00:29:13.875 } 00:29:13.875 } 00:29:13.875 ] 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "subsystem": "bdev", 00:29:13.875 "config": [ 00:29:13.875 { 00:29:13.875 "method": "bdev_set_options", 00:29:13.875 "params": { 00:29:13.875 "bdev_io_pool_size": 65535, 00:29:13.875 "bdev_io_cache_size": 256, 00:29:13.875 "bdev_auto_examine": true, 00:29:13.875 "iobuf_small_cache_size": 128, 00:29:13.875 "iobuf_large_cache_size": 16 00:29:13.875 } 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "method": "bdev_raid_set_options", 00:29:13.875 "params": { 00:29:13.875 "process_window_size_kb": 1024 00:29:13.875 } 00:29:13.875 }, 00:29:13.875 { 00:29:13.875 "method": "bdev_iscsi_set_options", 00:29:13.875 "params": { 00:29:13.875 "timeout_sec": 30 00:29:13.875 } 00:29:13.875 }, 00:29:13.875 { 00:29:13.876 "method": "bdev_nvme_set_options", 00:29:13.876 "params": { 00:29:13.876 "action_on_timeout": "none", 00:29:13.876 "timeout_us": 0, 00:29:13.876 "timeout_admin_us": 0, 00:29:13.876 "keep_alive_timeout_ms": 10000, 00:29:13.876 "arbitration_burst": 0, 00:29:13.876 "low_priority_weight": 0, 00:29:13.876 "medium_priority_weight": 0, 00:29:13.876 "high_priority_weight": 0, 00:29:13.876 "nvme_adminq_poll_period_us": 10000, 00:29:13.876 "nvme_ioq_poll_period_us": 0, 00:29:13.876 "io_queue_requests": 512, 00:29:13.876 "delay_cmd_submit": true, 00:29:13.876 "transport_retry_count": 4, 00:29:13.876 "bdev_retry_count": 3, 00:29:13.876 "transport_ack_timeout": 0, 00:29:13.876 "ctrlr_loss_timeout_sec": 0, 00:29:13.876 "reconnect_delay_sec": 0, 00:29:13.876 "fast_io_fail_timeout_sec": 0, 00:29:13.876 "disable_auto_failback": false, 00:29:13.876 "generate_uuids": false, 00:29:13.876 "transport_tos": 0, 00:29:13.876 "nvme_error_stat": false, 00:29:13.876 "rdma_srq_size": 0, 00:29:13.876 "io_path_stat": false, 00:29:13.876 "allow_accel_sequence": false, 00:29:13.876 "rdma_max_cq_size": 0, 00:29:13.876 "rdma_cm_event_timeout_ms": 0, 00:29:13.876 "dhchap_digests": [ 00:29:13.876 "sha256", 00:29:13.876 "sha384", 00:29:13.876 "sha512" 00:29:13.876 ], 00:29:13.876 "dhchap_dhgroups": [ 00:29:13.876 "null", 00:29:13.876 "ffdhe2048", 00:29:13.876 "ffdhe3072", 00:29:13.876 "ffdhe4096", 00:29:13.876 "ffdhe6144", 00:29:13.876 "ffdhe8192" 00:29:13.876 ] 00:29:13.876 } 00:29:13.876 }, 00:29:13.876 { 00:29:13.876 "method": "bdev_nvme_attach_controller", 00:29:13.876 "params": { 00:29:13.876 "name": "nvme0", 00:29:13.876 "trtype": "TCP", 00:29:13.876 "adrfam": "IPv4", 00:29:13.876 "traddr": "127.0.0.1", 00:29:13.876 "trsvcid": "4420", 00:29:13.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.876 "prchk_reftag": false, 00:29:13.876 "prchk_guard": false, 00:29:13.876 "ctrlr_loss_timeout_sec": 0, 00:29:13.876 "reconnect_delay_sec": 0, 00:29:13.876 "fast_io_fail_timeout_sec": 0, 00:29:13.876 "psk": "key0", 00:29:13.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.876 "hdgst": false, 00:29:13.876 "ddgst": false 00:29:13.876 } 00:29:13.876 }, 00:29:13.876 { 00:29:13.876 "method": "bdev_nvme_set_hotplug", 
00:29:13.876 "params": { 00:29:13.876 "period_us": 100000, 00:29:13.876 "enable": false 00:29:13.876 } 00:29:13.876 }, 00:29:13.876 { 00:29:13.876 "method": "bdev_wait_for_examine" 00:29:13.876 } 00:29:13.876 ] 00:29:13.876 }, 00:29:13.876 { 00:29:13.876 "subsystem": "nbd", 00:29:13.876 "config": [] 00:29:13.876 } 00:29:13.876 ] 00:29:13.876 }' 00:29:13.876 21:22:29 -- keyring/file.sh@114 -- # killprocess 3246947 00:29:13.876 21:22:29 -- common/autotest_common.sh@936 -- # '[' -z 3246947 ']' 00:29:13.876 21:22:29 -- common/autotest_common.sh@940 -- # kill -0 3246947 00:29:13.876 21:22:29 -- common/autotest_common.sh@941 -- # uname 00:29:13.876 21:22:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:13.876 21:22:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3246947 00:29:14.136 21:22:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:14.136 21:22:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:14.136 21:22:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3246947' 00:29:14.136 killing process with pid 3246947 00:29:14.136 21:22:29 -- common/autotest_common.sh@955 -- # kill 3246947 00:29:14.136 Received shutdown signal, test time was about 1.000000 seconds 00:29:14.136 00:29:14.136 Latency(us) 00:29:14.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.136 =================================================================================================================== 00:29:14.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.136 21:22:29 -- common/autotest_common.sh@960 -- # wait 3246947 00:29:14.136 21:22:30 -- keyring/file.sh@117 -- # bperfpid=3248506 00:29:14.136 21:22:30 -- keyring/file.sh@119 -- # waitforlisten 3248506 /var/tmp/bperf.sock 00:29:14.136 21:22:30 -- common/autotest_common.sh@817 -- # '[' -z 3248506 ']' 00:29:14.136 21:22:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.136 21:22:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:14.136 21:22:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:14.136 21:22:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:14.136 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:29:14.136 21:22:30 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:14.136 21:22:30 -- keyring/file.sh@115 -- # echo '{ 00:29:14.136 "subsystems": [ 00:29:14.136 { 00:29:14.136 "subsystem": "keyring", 00:29:14.136 "config": [ 00:29:14.136 { 00:29:14.136 "method": "keyring_file_add_key", 00:29:14.136 "params": { 00:29:14.136 "name": "key0", 00:29:14.136 "path": "/tmp/tmp.syscXyk3Sm" 00:29:14.136 } 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "method": "keyring_file_add_key", 00:29:14.136 "params": { 00:29:14.136 "name": "key1", 00:29:14.136 "path": "/tmp/tmp.2CLPNATrAU" 00:29:14.136 } 00:29:14.136 } 00:29:14.136 ] 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "subsystem": "iobuf", 00:29:14.136 "config": [ 00:29:14.136 { 00:29:14.136 "method": "iobuf_set_options", 00:29:14.136 "params": { 00:29:14.136 "small_pool_count": 8192, 00:29:14.136 "large_pool_count": 1024, 00:29:14.136 "small_bufsize": 8192, 00:29:14.136 "large_bufsize": 135168 00:29:14.136 } 00:29:14.136 } 00:29:14.136 ] 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "subsystem": "sock", 00:29:14.136 "config": [ 00:29:14.136 { 00:29:14.136 "method": "sock_impl_set_options", 00:29:14.136 "params": { 00:29:14.136 "impl_name": "posix", 00:29:14.136 "recv_buf_size": 2097152, 00:29:14.136 "send_buf_size": 2097152, 00:29:14.136 "enable_recv_pipe": true, 00:29:14.136 "enable_quickack": false, 00:29:14.136 "enable_placement_id": 0, 00:29:14.136 "enable_zerocopy_send_server": true, 00:29:14.136 "enable_zerocopy_send_client": false, 00:29:14.136 "zerocopy_threshold": 0, 00:29:14.136 "tls_version": 0, 00:29:14.136 "enable_ktls": false 00:29:14.136 } 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "method": "sock_impl_set_options", 00:29:14.136 "params": { 00:29:14.136 "impl_name": "ssl", 00:29:14.136 "recv_buf_size": 4096, 00:29:14.136 "send_buf_size": 4096, 00:29:14.136 "enable_recv_pipe": true, 00:29:14.136 "enable_quickack": false, 00:29:14.136 "enable_placement_id": 0, 00:29:14.136 "enable_zerocopy_send_server": true, 00:29:14.136 "enable_zerocopy_send_client": false, 00:29:14.136 "zerocopy_threshold": 0, 00:29:14.136 "tls_version": 0, 00:29:14.136 "enable_ktls": false 00:29:14.136 } 00:29:14.136 } 00:29:14.136 ] 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "subsystem": "vmd", 00:29:14.136 "config": [] 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "subsystem": "accel", 00:29:14.136 "config": [ 00:29:14.136 { 00:29:14.136 "method": "accel_set_options", 00:29:14.136 "params": { 00:29:14.136 "small_cache_size": 128, 00:29:14.136 "large_cache_size": 16, 00:29:14.136 "task_count": 2048, 00:29:14.136 "sequence_count": 2048, 00:29:14.136 "buf_count": 2048 00:29:14.136 } 00:29:14.136 } 00:29:14.136 ] 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "subsystem": "bdev", 00:29:14.136 "config": [ 00:29:14.136 { 00:29:14.136 "method": "bdev_set_options", 00:29:14.136 "params": { 00:29:14.136 "bdev_io_pool_size": 65535, 00:29:14.136 "bdev_io_cache_size": 256, 00:29:14.136 "bdev_auto_examine": true, 00:29:14.136 "iobuf_small_cache_size": 128, 00:29:14.136 "iobuf_large_cache_size": 16 00:29:14.136 } 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "method": "bdev_raid_set_options", 00:29:14.136 "params": { 00:29:14.136 "process_window_size_kb": 1024 00:29:14.136 } 00:29:14.136 }, 00:29:14.136 { 
00:29:14.136 "method": "bdev_iscsi_set_options", 00:29:14.136 "params": { 00:29:14.136 "timeout_sec": 30 00:29:14.136 } 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "method": "bdev_nvme_set_options", 00:29:14.136 "params": { 00:29:14.136 "action_on_timeout": "none", 00:29:14.136 "timeout_us": 0, 00:29:14.136 "timeout_admin_us": 0, 00:29:14.136 "keep_alive_timeout_ms": 10000, 00:29:14.136 "arbitration_burst": 0, 00:29:14.136 "low_priority_weight": 0, 00:29:14.136 "medium_priority_weight": 0, 00:29:14.136 "high_priority_weight": 0, 00:29:14.136 "nvme_adminq_poll_period_us": 10000, 00:29:14.136 "nvme_ioq_poll_period_us": 0, 00:29:14.136 "io_queue_requests": 512, 00:29:14.136 "delay_cmd_submit": true, 00:29:14.136 "transport_retry_count": 4, 00:29:14.136 "bdev_retry_count": 3, 00:29:14.136 "transport_ack_timeout": 0, 00:29:14.136 "ctrlr_loss_timeout_sec": 0, 00:29:14.136 "reconnect_delay_sec": 0, 00:29:14.136 "fast_io_fail_timeout_sec": 0, 00:29:14.136 "disable_auto_failback": false, 00:29:14.136 "generate_uuids": false, 00:29:14.136 "transport_tos": 0, 00:29:14.136 "nvme_error_stat": false, 00:29:14.136 "rdma_srq_size": 0, 00:29:14.136 "io_path_stat": false, 00:29:14.136 "allow_accel_sequence": false, 00:29:14.136 "rdma_max_cq_size": 0, 00:29:14.136 "rdma_cm_event_timeout_ms": 0, 00:29:14.136 "dhchap_digests": [ 00:29:14.136 "sha256", 00:29:14.136 "sha384", 00:29:14.136 "sha512" 00:29:14.136 ], 00:29:14.136 "dhchap_dhgroups": [ 00:29:14.136 "null", 00:29:14.136 "ffdhe2048", 00:29:14.136 "ffdhe3072", 00:29:14.136 "ffdhe4096", 00:29:14.136 "ffdhe6144", 00:29:14.136 "ffdhe8192" 00:29:14.136 ] 00:29:14.136 } 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "method": "bdev_nvme_attach_controller", 00:29:14.136 "params": { 00:29:14.136 "name": "nvme0", 00:29:14.136 "trtype": "TCP", 00:29:14.136 "adrfam": "IPv4", 00:29:14.136 "traddr": "127.0.0.1", 00:29:14.136 "trsvcid": "4420", 00:29:14.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.136 "prchk_reftag": false, 00:29:14.136 "prchk_guard": false, 00:29:14.136 "ctrlr_loss_timeout_sec": 0, 00:29:14.136 "reconnect_delay_sec": 0, 00:29:14.136 "fast_io_fail_timeout_sec": 0, 00:29:14.136 "psk": "key0", 00:29:14.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:14.136 "hdgst": false, 00:29:14.136 "ddgst": false 00:29:14.136 } 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "method": "bdev_nvme_set_hotplug", 00:29:14.136 "params": { 00:29:14.136 "period_us": 100000, 00:29:14.136 "enable": false 00:29:14.136 } 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "method": "bdev_wait_for_examine" 00:29:14.136 } 00:29:14.136 ] 00:29:14.136 }, 00:29:14.136 { 00:29:14.136 "subsystem": "nbd", 00:29:14.136 "config": [] 00:29:14.136 } 00:29:14.136 ] 00:29:14.136 }' 00:29:14.396 [2024-04-18 21:22:30.089963] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:29:14.396 [2024-04-18 21:22:30.090013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248506 ] 00:29:14.396 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.396 [2024-04-18 21:22:30.148674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.396 [2024-04-18 21:22:30.226341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.656 [2024-04-18 21:22:30.377201] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:15.224 21:22:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:15.224 21:22:30 -- common/autotest_common.sh@850 -- # return 0 00:29:15.224 21:22:30 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:15.224 21:22:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.224 21:22:30 -- keyring/file.sh@120 -- # jq length 00:29:15.224 21:22:31 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:15.224 21:22:31 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:15.224 21:22:31 -- keyring/common.sh@12 -- # get_key key0 00:29:15.224 21:22:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.224 21:22:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.224 21:22:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:15.224 21:22:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.483 21:22:31 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:15.483 21:22:31 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:15.483 21:22:31 -- keyring/common.sh@12 -- # get_key key1 00:29:15.483 21:22:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.483 21:22:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.483 21:22:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:15.483 21:22:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.743 21:22:31 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:15.743 21:22:31 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:15.743 21:22:31 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:15.743 21:22:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:15.743 21:22:31 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:15.743 21:22:31 -- keyring/file.sh@1 -- # cleanup 00:29:15.743 21:22:31 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.syscXyk3Sm /tmp/tmp.2CLPNATrAU 00:29:15.743 21:22:31 -- keyring/file.sh@20 -- # killprocess 3248506 00:29:15.743 21:22:31 -- common/autotest_common.sh@936 -- # '[' -z 3248506 ']' 00:29:15.743 21:22:31 -- common/autotest_common.sh@940 -- # kill -0 3248506 00:29:15.743 21:22:31 -- common/autotest_common.sh@941 -- # uname 00:29:15.743 21:22:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:15.743 21:22:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3248506 00:29:15.743 21:22:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:15.743 21:22:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:15.743 21:22:31 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3248506' 00:29:15.743 killing process with pid 3248506 00:29:15.743 21:22:31 -- common/autotest_common.sh@955 -- # kill 3248506 00:29:15.743 Received shutdown signal, test time was about 1.000000 seconds 00:29:15.743 00:29:15.743 Latency(us) 00:29:15.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.743 =================================================================================================================== 00:29:15.743 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:15.743 21:22:31 -- common/autotest_common.sh@960 -- # wait 3248506 00:29:16.006 21:22:31 -- keyring/file.sh@21 -- # killprocess 3246859 00:29:16.006 21:22:31 -- common/autotest_common.sh@936 -- # '[' -z 3246859 ']' 00:29:16.006 21:22:31 -- common/autotest_common.sh@940 -- # kill -0 3246859 00:29:16.006 21:22:31 -- common/autotest_common.sh@941 -- # uname 00:29:16.006 21:22:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:16.006 21:22:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3246859 00:29:16.006 21:22:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:16.006 21:22:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:16.006 21:22:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3246859' 00:29:16.006 killing process with pid 3246859 00:29:16.006 21:22:31 -- common/autotest_common.sh@955 -- # kill 3246859 00:29:16.006 [2024-04-18 21:22:31.903597] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:16.006 21:22:31 -- common/autotest_common.sh@960 -- # wait 3246859 00:29:16.576 00:29:16.576 real 0m12.084s 00:29:16.576 user 0m28.439s 00:29:16.576 sys 0m2.671s 00:29:16.576 21:22:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:16.576 21:22:32 -- common/autotest_common.sh@10 -- # set +x 00:29:16.576 ************************************ 00:29:16.576 END TEST keyring_file 00:29:16.576 ************************************ 00:29:16.576 21:22:32 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:16.576 21:22:32 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:16.576 21:22:32 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:16.576 21:22:32 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:16.576 21:22:32 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:16.576 21:22:32 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:16.576 21:22:32 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:16.576 21:22:32 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:16.576 21:22:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:16.576 21:22:32 -- common/autotest_common.sh@10 -- # set +x 00:29:16.576 21:22:32 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:29:16.576 21:22:32 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:16.576 21:22:32 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:16.576 21:22:32 -- common/autotest_common.sh@10 -- # set +x 00:29:20.770 INFO: APP EXITING 00:29:20.770 INFO: killing all VMs 00:29:20.770 INFO: killing vhost app 00:29:20.770 INFO: EXIT DONE 00:29:24.061 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:29:24.061 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:29:24.061 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:29:27.353 Cleaning 00:29:27.353 Removing: /var/run/dpdk/spdk0/config 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:27.353 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:27.353 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:27.353 Removing: /var/run/dpdk/spdk1/config 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:27.353 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:27.353 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:27.353 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:27.353 Removing: /var/run/dpdk/spdk2/config 00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:27.353 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:27.353 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:27.353 Removing: /var/run/dpdk/spdk3/config 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:27.353 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:27.353 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:27.353 Removing: /var/run/dpdk/spdk4/config 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:27.353 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:27.353 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:27.353 Removing: /dev/shm/bdev_svc_trace.1 00:29:27.353 Removing: /dev/shm/nvmf_trace.0 00:29:27.353 Removing: /dev/shm/spdk_tgt_trace.pid2857496 00:29:27.353 Removing: /var/run/dpdk/spdk0 00:29:27.353 Removing: /var/run/dpdk/spdk1 00:29:27.353 Removing: /var/run/dpdk/spdk2 00:29:27.353 Removing: /var/run/dpdk/spdk3 00:29:27.353 Removing: /var/run/dpdk/spdk4 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2855108 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2856181 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2857496 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2858179 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2859134 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2859370 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2860361 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2860590 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2860940 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2862448 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2863724 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2864013 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2864313 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2864658 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2865142 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2865407 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2865660 00:29:27.353 Removing: /var/run/dpdk/spdk_pid2865950 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2866931 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2869936 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2870212 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2870484 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2870708 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2871208 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2871242 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2871722 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2871950 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2872223 00:29:27.354 Removing: 
/var/run/dpdk/spdk_pid2872399 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2872575 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2872734 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2873302 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2873557 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2873858 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2874143 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2874321 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2874482 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2874736 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2875004 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2875335 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2875698 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2875980 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2876240 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2876497 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2876754 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2877014 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2877273 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2877597 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2877951 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2878258 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2878519 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2878772 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2879033 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2879290 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2879555 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2879829 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2880191 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2880364 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2880709 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2885076 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2932282 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2936829 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2946537 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2952303 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2956790 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2957465 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2969805 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2969852 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2970729 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2971640 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2972553 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2973024 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2973043 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2973317 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2973487 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2973493 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2974406 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2975307 00:29:27.354 Removing: /var/run/dpdk/spdk_pid2976095 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2976713 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2976715 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2976949 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2978198 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2979407 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2988172 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2988502 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2993592 00:29:27.613 Removing: /var/run/dpdk/spdk_pid2999760 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3002365 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3013589 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3023291 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3025121 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3026039 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3044503 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3048801 00:29:27.613 Removing: 
/var/run/dpdk/spdk_pid3066126 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3071040 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3072643 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3074474 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3074718 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3074953 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3075191 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3075700 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3077626 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3078607 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3079191 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3081865 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3082591 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3083314 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3087876 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3098635 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3102454 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3108844 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3110266 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3111820 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3116427 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3120958 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3129361 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3129367 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3134887 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3134991 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3135230 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3135585 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3135611 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3140356 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3140924 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3145562 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3148319 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3154080 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3159846 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3167665 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3167668 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3187672 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3188365 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3188990 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3189570 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3190525 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3191221 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3191917 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3192401 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3197176 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3197403 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3203773 00:29:27.613 Removing: /var/run/dpdk/spdk_pid3204047 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3206268 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3214427 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3214520 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3220170 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3222145 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3224069 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3225167 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3227262 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3228846 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3238190 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3238650 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3239310 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3241691 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3242155 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3242622 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3246859 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3246947 00:29:27.873 Removing: /var/run/dpdk/spdk_pid3248506 00:29:27.873 Clean 
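The "Cleaning" block above is autotest teardown: it clears per-process DPDK runtime state and SPDK trace shared memory so the next job starts from a clean slate. Purely as an illustration of what those Removing: lines correspond to (the real cleanup is autotest_cleanup from common/autotest_common.sh and covers more than this):

    # Illustrative sketch only: drop leftover DPDK runtime dirs and SPDK trace shm files.
    rm -rf /var/run/dpdk/spdk* /var/run/dpdk/spdk_pid*
    rm -f /dev/shm/spdk_tgt_trace.* /dev/shm/nvmf_trace.* /dev/shm/bdev_svc_trace.*
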
00:29:27.873 21:22:43 -- common/autotest_common.sh@1437 -- # return 0 00:29:27.873 21:22:43 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:27.873 21:22:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:27.873 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:29:27.873 21:22:43 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:27.873 21:22:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:27.873 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:29:28.133 21:22:43 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:28.133 21:22:43 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:28.133 21:22:43 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:28.133 21:22:43 -- spdk/autotest.sh@389 -- # hash lcov 00:29:28.133 21:22:43 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:28.133 21:22:43 -- spdk/autotest.sh@391 -- # hostname 00:29:28.133 21:22:43 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:28.133 geninfo: WARNING: invalid characters removed from testname! 00:29:50.111 21:23:03 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:50.111 21:23:05 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:52.016 21:23:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:53.395 21:23:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:55.300 21:23:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:57.205 21:23:12 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:59.109 21:23:14 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:59.109 21:23:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.109 21:23:14 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:59.109 21:23:14 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.109 21:23:14 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.110 21:23:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.110 21:23:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.110 21:23:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.110 21:23:14 -- paths/export.sh@5 -- $ export PATH 00:29:59.110 21:23:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.110 21:23:14 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:59.110 21:23:14 -- common/autobuild_common.sh@435 -- $ date +%s 00:29:59.110 21:23:14 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713468194.XXXXXX 00:29:59.110 21:23:14 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713468194.qc0iiY 00:29:59.110 21:23:14 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:29:59.110 21:23:14 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:29:59.110 21:23:14 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 
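The coverage post-processing above is the usual lcov merge-then-filter flow: capture coverage for this test run against the SPDK tree (tagged with the hostname), merge it with the baseline capture, then strip external and uninteresting paths from the merged tracefile. A condensed sketch using the same options and filter patterns that appear in the log; $spdk and $out stand for the workspace and output directories, and the long --rc genhtml/geninfo options are abbreviated into one variable.

    lcovrc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $lcovrc --no-external -q -c -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"
    lcov $lcovrc --no-external -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $lcovrc --no-external -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done
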
00:29:59.110 21:23:14 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:59.110 21:23:14 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:59.110 21:23:14 -- common/autobuild_common.sh@451 -- $ get_config_params 00:29:59.110 21:23:14 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:29:59.110 21:23:14 -- common/autotest_common.sh@10 -- $ set +x 00:29:59.110 21:23:14 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:59.110 21:23:14 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:29:59.110 21:23:14 -- pm/common@17 -- $ local monitor 00:29:59.110 21:23:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:59.110 21:23:14 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3258453 00:29:59.110 21:23:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:59.110 21:23:14 -- pm/common@21 -- $ date +%s 00:29:59.110 21:23:14 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3258455 00:29:59.110 21:23:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:59.110 21:23:14 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3258458 00:29:59.110 21:23:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:59.110 21:23:14 -- pm/common@21 -- $ date +%s 00:29:59.110 21:23:14 -- pm/common@21 -- $ date +%s 00:29:59.110 21:23:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713468194 00:29:59.110 21:23:14 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3258460 00:29:59.110 21:23:14 -- pm/common@26 -- $ sleep 1 00:29:59.110 21:23:14 -- pm/common@21 -- $ date +%s 00:29:59.110 21:23:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713468194 00:29:59.110 21:23:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713468194 00:29:59.110 21:23:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713468194 00:29:59.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713468194_collect-vmstat.pm.log 00:29:59.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713468194_collect-cpu-load.pm.log 00:29:59.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713468194_collect-cpu-temp.pm.log 00:29:59.110 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713468194_collect-bmc-pm.bmc.pm.log 00:30:00.050 21:23:15 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:00.050 21:23:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:30:00.050 21:23:15 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:00.050 21:23:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:00.050 21:23:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:00.050 21:23:15 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:00.050 21:23:15 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:00.050 21:23:15 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:00.050 21:23:15 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:00.050 21:23:15 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:00.050 21:23:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:00.050 21:23:15 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:00.050 21:23:15 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:00.050 21:23:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:00.050 21:23:15 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:00.050 21:23:15 -- pm/common@45 -- $ pid=3258469 00:30:00.050 21:23:15 -- pm/common@52 -- $ sudo kill -TERM 3258469 00:30:00.050 21:23:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:00.050 21:23:15 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:00.050 21:23:15 -- pm/common@45 -- $ pid=3258481 00:30:00.050 21:23:15 -- pm/common@52 -- $ sudo kill -TERM 3258481 00:30:00.050 21:23:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:00.050 21:23:15 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:00.050 21:23:15 -- pm/common@45 -- $ pid=3258480 00:30:00.050 21:23:15 -- pm/common@52 -- $ sudo kill -TERM 3258480 00:30:00.050 21:23:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:00.050 21:23:15 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:00.050 21:23:15 -- pm/common@45 -- $ pid=3258482 00:30:00.050 21:23:15 -- pm/common@52 -- $ sudo kill -TERM 3258482 00:30:00.050 + [[ -n 2747781 ]] 00:30:00.050 + sudo kill 2747781 00:30:00.059 [Pipeline] } 00:30:00.075 [Pipeline] // stage 00:30:00.080 [Pipeline] } 00:30:00.094 [Pipeline] // timeout 00:30:00.099 [Pipeline] } 00:30:00.113 [Pipeline] // catchError 00:30:00.120 [Pipeline] } 00:30:00.137 [Pipeline] // wrap 00:30:00.143 [Pipeline] } 00:30:00.159 [Pipeline] // catchError 00:30:00.167 [Pipeline] stage 00:30:00.169 [Pipeline] { (Epilogue) 00:30:00.183 [Pipeline] catchError 00:30:00.185 [Pipeline] { 00:30:00.199 [Pipeline] echo 00:30:00.201 Cleanup processes 00:30:00.205 [Pipeline] sh 00:30:00.486 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:00.486 3258593 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:00.486 3258884 sudo pgrep -af 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:00.500 [Pipeline] sh 00:30:00.785 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:00.785 ++ grep -v 'sudo pgrep' 00:30:00.785 ++ awk '{print $1}' 00:30:00.785 + sudo kill -9 3258593 00:30:00.797 [Pipeline] sh 00:30:01.080 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:11.079 [Pipeline] sh 00:30:11.364 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:11.365 Artifacts sizes are good 00:30:11.378 [Pipeline] archiveArtifacts 00:30:11.395 Archiving artifacts 00:30:11.560 [Pipeline] sh 00:30:11.844 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:11.859 [Pipeline] cleanWs 00:30:11.869 [WS-CLEANUP] Deleting project workspace... 00:30:11.869 [WS-CLEANUP] Deferred wipeout is used... 00:30:11.876 [WS-CLEANUP] done 00:30:11.877 [Pipeline] } 00:30:11.897 [Pipeline] // catchError 00:30:11.909 [Pipeline] sh 00:30:12.192 + logger -p user.info -t JENKINS-CI 00:30:12.201 [Pipeline] } 00:30:12.217 [Pipeline] // stage 00:30:12.222 [Pipeline] } 00:30:12.239 [Pipeline] // node 00:30:12.244 [Pipeline] End of Pipeline 00:30:12.283 Finished: SUCCESS